Teleradiology AI’s Mixed Bag

An AI algorithm that examined teleradiology studies for signs of intracranial hemorrhage had mixed performance in a new study in Radiology: Artificial Intelligence. AI helped detect ICH cases that might have been missed, but false positives slowed radiologists down. 

AI is being touted as a tool that can detect unseen pathology and speed up the workflow of radiologists facing an environment of limited resources and growing image volume.

  • This dynamic is particularly evident at teleradiology practices, which frequently see high volumes during off-hour shifts; indeed, a recent study found that telerad cases had higher rates of patient death and more malpractice claims than cases read by traditional radiology practices.

So teleradiologists could use a bit more help. In the new study, researchers from the VA’s National Teleradiology Program assessed Avicenna.ai’s CINA v1.0 algorithm for detecting ICH on STAT non-contrast head CT studies.

  • AI was used to analyze 58.3k CT exams processed by the teleradiology service from January 2023 to February 2024, with a 2.7% prevalence of ICH.

Results were as follows …

  • AI flagged 5.7k studies as positive for acute ICH and 52.7k as negative
  • Final radiology reports confirmed that 1.2k exams were true positives for a sensitivity of 76% and a positive predictive value of 21%
  • There were 384 false negatives (missed ICH cases), for a specificity of 92% and a negative predictive value of 99.3%
  • The algorithm’s performance at the VA was a bit lower than in previously published literature
  • Cases that the algorithm falsely flagged as positive took over a minute longer to interpret than prior to AI deployment
  • Overall, case interpretation times were slightly lower after AI than before
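As a sanity check, the headline accuracy statistics can be re-derived from the study's rounded counts (so values match only to rounding); a minimal sketch in Python:

```python
# Re-derive the reported ICH metrics from the study's rounded counts.
flagged_pos = 5_700   # exams AI flagged positive for acute ICH
flagged_neg = 52_700  # exams AI flagged negative
tp = 1_200            # true positives confirmed by final reports
fn = 384              # false negatives (missed ICH cases)

fp = flagged_pos - tp  # 4,500 false positives
tn = flagged_neg - fn  # 52,316 true negatives

sensitivity = tp / (tp + fn)  # ~0.76
specificity = tn / (tn + fp)  # ~0.92
ppv = tp / flagged_pos        # ~0.21
npv = tn / flagged_neg        # ~0.993
print(f"Se={sensitivity:.2f} Sp={specificity:.2f} PPV={ppv:.2f} NPV={npv:.3f}")
```

The numbers line up with the published 76%/92%/21%/99.3%, which is reassuring given the rounding in the reported counts.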

One issue to note is that the CINA algorithm is not intended for small hemorrhages with volumes < 3 mL; the researchers did not exclude these cases from their analysis, which could have reduced its performance.

  • Also, at 2.7% the VA’s teleradiology program ICH prevalence was lower than the 10% prevalence Avicenna has used to rate its performance.
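The prevalence gap matters because positive predictive value depends on prevalence even when a test's sensitivity and specificity are fixed. A quick illustration using the study's approximate operating point (76% sensitivity, 92% specificity) and the two prevalence figures above:

```python
# PPV as a function of disease prevalence, holding test performance fixed.
def ppv(sens: float, spec: float, prev: float) -> float:
    tp = sens * prev              # true-positive fraction of all exams
    fp = (1 - spec) * (1 - prev)  # false-positive fraction of all exams
    return tp / (tp + fp)

# At the VA's 2.7% prevalence, PPV lands near the reported 21% ...
print(f"{ppv(0.76, 0.92, 0.027):.2f}")  # ~0.21
# ... but at the 10% prevalence used in vendor testing it would be ~51%
print(f"{ppv(0.76, 0.92, 0.10):.2f}")   # ~0.51
```

In other words, the same algorithm generates far more false alarms per true finding in a low-prevalence screening population than in an enriched test set.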

The Takeaway

The new findings aren’t exactly a slam dunk for AI in the teleradiology setting, but in terms of real-world results they are exactly what’s needed to assess the true value of the technology compared to outcomes in more tightly controlled environments.

6 Solutions to the RT Shortage

Earlier this week, we described the looming shortage of radiologists in the US; this week the focus turns to radiologic technologists. A new report from the ASRT and other groups suggests the RT staffing shortage is severe, but offers some solutions.

The healthcare industry has suffered in the post-COVID era as the need for medical services has surged due to the aging population while the number of personnel has dropped as staff leave because of retirement, burnout, and other reasons.

  • At the same time, fewer trainees are entering healthcare, a phenomenon that’s particularly problematic with allied health personnel like nurses and technologists. 

The numbers are dire, based on previously collected data …

  • Vacancy rates for all medical imaging and radiation therapy professionals are at the highest levels since the ASRT began tracking staffing in 2003
  • The radiographer vacancy rate nearly tripled in 2023 compared to 2021 (18% vs. 6.2%)
  • The number of people taking the ARRT’s radiography certification exam in 2022 fell 18% compared to 2006 (14.3k vs. 17.5k)

To address the problem, ASRT collaborated with 17 other radiological sciences groups including ARRT and JRCERT to first conduct a survey of 8.7k medical imaging and radiation therapy professionals to assess their work environment. 

  • The groups then convened a two-day meeting in February at ASRT headquarters in Albuquerque, New Mexico. 

They agreed on six major solutions to address the workforce crisis …

  • Raise awareness through campaigns, such as on social media, to attract new students
  • Articulate clear career pathways so professionals can choose careers in clinical practice, management, or education at different levels and roles. This would include a new entry-level role, imaging medical aide (IMA), that would be offered by high schools and community colleges as a stepping stone to RT status
  • Create a pipeline from educational programs to the workplace, and make AI a mandatory part of the educational curriculum
  • Build a career ladder that defines different clinical titles for professionals in clinical and leadership roles 
  • Expand educational opportunities such as in rural and underserved communities, and create a one-stop-shop portal for educators
  • Improve workplace satisfaction through tools such as awards programs and CE opportunities on workplace satisfaction

The Takeaway

Trying to work against powerful demographic trends can sometimes seem like swimming upstream. But the new report is a good first step toward a more organized and unified response to the radiologic technologist staffing shortage.

Radiologist Shortage Looms

A new report from healthcare staffing firm Medicus Healthcare Solutions paints a gloomy picture of the demographic crush facing radiology as the US population ages and imaging volumes rise, but the number of radiologists remains static. 

Radiology’s demographic dilemma isn’t new to anyone in the field. Radiologists are having to work harder to meet growing demand for imaging by an aging population, while reimbursement falls.

  • Meanwhile, efforts to grow the number of radiologists are hamstrung by the country’s physician training system, which requires a literal act of Congress in order to expand the number of residency slots

The new Medicus report mostly draws on established data sources, but it provides insight into the supply and demand challenges facing radiology, presented in an attractive graphical format. Salient points include …

  • There are about 37.7k diagnostic radiologists in the US, with projected job growth of 4% through 2032
  • Since 2020 there have been only 22 new diagnostic radiology residency PGY-1 positions added
  • From 2010 to 2020, the number of diagnostic radiology trainees grew 2.5%, while the number of US adults over 65 rose 34%
  • By 2030, all baby boomers will be aged 65 and older – and will require more medical care
  • The gap between radiology supply and demand is expected to grow through 2034 (see above chart)

What’s more, the vast majority of radiologists reaching retirement age are generalists, while the field’s recent focus on subspecialization means many younger radiologists aren’t comfortable reading scans outside their focus. 

The Medicus report isn’t all doom and gloom. It does offer some possible solutions to the staffing shortage, including teleradiology, AI, and increased use of locum tenens radiologist services (which Medicus provides).

The Takeaway

The Medicus report provides a snapshot of a medical specialty that – like many others – is facing a demographic crunch between rising demand and fixed supply. Hopefully, technologies like AI will enable radiologists to do more with less in the years to come.

CT Colonography Breakthrough

In a major news development this week, CMS proposed to begin Medicare coverage of CT colonography screening – also known as virtual colonoscopy – starting in 2025. The move will give radiology an entree into another of the major cancer screening tests. 

CT colonography has been around for over 30 years as an imaging-based alternative to optical colonoscopy for colorectal cancer screening. The exam produces a virtual fly-through of a patient’s colon that can detect pre-cancerous polyps.

  • CTC has a number of advantages over traditional colonoscopy: patients don’t need to be sedated, and there is lower risk of complications such as bowel perforation. 

But CTC has struggled to gain wider acceptance in the face of fierce resistance from gastroenterologists. 

  • Gastroenterologists typically prefer to steer their patients to optical colonoscopy for cancer screening rather than refer them out for imaging exams.

The USPSTF in 2016 added CT colonography to its list of recommended cancer screening exams. 

  • This led to a 50% jump in virtual colonoscopy exams performed for privately insured patients. 

But as anyone who follows the US healthcare system knows, Medicare is the big enchilada when it comes to reimbursement, and the gastroenterology community has successfully fought off efforts to secure broader payment.

  • This comes in spite of clinical studies showing CT colonography’s effectiveness, and even the widely reported case of President Barack Obama undergoing a CTC screening exam in 2010 as part of his annual physical because it didn’t require sedation.  

But enough ancient history, on to this week’s news. In a proposed rule for the 2025 HOPPS issued on July 10, CMS proposed the following:

  • Remove coverage for barium enema for colorectal cancer screening, as it “no longer meets modern clinical standards”
  • Add coverage for CT colonography, creating Ambulatory Payment Classification (APC) 74261 for CTC without contrast and 74262 for CTC with contrast
  • Reassign CPT code 74263 for CTC/VC from “not payable” to “payable” status 

The Takeaway

This week’s news is a huge win for radiology and indicates that gastroenterology’s stranglehold on colorectal cancer screening is finally beginning to crack. Imaging facilities should begin preparing to offer CT colonography as a less invasive alternative to optical colonoscopy for Medicare beneficiaries.

Top 6 Radiology Trends of 2024’s First Half

You can put the first half of 2024 in the books … and it was full of major developments for radiology. What follows are the top six trends in medical imaging – one for each month of the first half.

  • The Rise of AI for Breast Screening – The first half of 2024 saw the publication of studies conducted in Norway and Denmark that underlined the potential role of AI for breast screening, particularly for ruling out exams most likely to be normal. But research conducted within Europe’s paradigm of double-reading workflow for 2D mammograms may not be so relevant in the US, and more studies are needed.
  • Mammography Guideline Controversy – Changes to breast screening guidelines in both the US and Canada were first-half headlines. In the US, the USPSTF made official its proposal to lower to 40 the recommended age to start screening, but many were disappointed it failed to provide stronger guidance on dense breast screening. Things were even worse in Canada, where a federal task force declined to lower the screening age from 50 to 40. Canadian advocates have vowed to fight on at the provincial level. 
  • AI Funding Pullback Continues – The ongoing pullback in venture capital funding for AI developers continues. A study by Signify Research found that not only did VC funding fall 19% in 2023, but it got off to a slow start in 2024 as well. The new environment could be putting more pressure on AI firms to demonstrate ROI to both healthcare providers and investors, while also having broader implications – a major AI conference rescheduled a show that had been on the calendar for May, citing “market conditions.” On the positive side, Tempus AI’s IPO boomed, raising $412M.
  • Opportunistic Screening Gains Steam – The concept of opportunistic screening – detecting pathology on medical images acquired for other indications – has been around for a while. But it’s only really started to catch on with the development of AI algorithms that can process thousands of images without a radiologist’s involvement. The first half of 2024 saw publication of several exciting studies for indications including detecting osteoporosis, scoring coronary artery calcifications, and predicting major adverse cardiac events.
  • ChatGPT Frenzy Subsides – The frenzied interest in ChatGPT and other generative AI large language models seen throughout 2023 seemed to subside in the first half of 2024. A quick search of The Imaging Wire archives, for example, finds just four references to ChatGPT in the first six months of 2024 compared to 21 citations at the same point in 2023. LLM developers need to address major issues – from GenAI’s “hallucination effect” to potential misuse of the technology – before LLMs can be used in clinical settings.

The Takeaway

The midpoint of the year is a great time to take stock of radiology’s progress and the issues that have bubbled to the surface over the past six months. In 2024’s back half, look for renewed attention on breast screening as the FDA’s density reporting rules go into effect in September, and keep on the lookout for signs that real-world AI adoption is growing, even as AI developers look for consolidation opportunities.

Top 4 Trends from SIIM 2024

SIIM 2024 concluded this weekend, and what a meeting it was. The radiology industry’s premier imaging IT show returned to National Harbor, MD, for the first time since 2018, where the Biosphere-like environment of the Gaylord National Resort and Convention Center offered a respite from the muggy weather outside. 

SIIM is always a great place to check in on new imaging IT technologies like PACS, AI, and enterprise imaging, and hot topics at SIIM 2024 included…  

  • AI Needs to Get Real (World): Research studies showing AI’s value are fine, but developers need to show that AI works in real-world settings before wider adoption will occur. Fortunately that’s started with landmark studies published recently for use cases like breast and osteoporosis screening. Meanwhile, scuttlebutt on the SIIM 2024 exhibit floor reinforced that start-ups are navigating an ugly funding environment, and many industry observers are predicting a wave of AI consolidation. 
  • Outlook Clears for the Cloud: Cloud-based imaging has struggled to catch on for years, but that’s starting to change as healthcare providers warm to the concept of letting third parties oversee their patient data. And there are signs that imaging IT vendors that were quick to develop cloud-based versions of their PACS software are reaping the rewards.
  • Enterprise Imaging Grows Up: This year’s meeting marked the 10-year anniversary of enterprise imaging, as dated from the start of the SIIM-HIMSS collaboration in 2014. The anniversary is a milestone worth observing, but it also raises questions about what the next 10 years will look like, and how AI and data from other -ologies will be integrated into enterprise networks. 
  • Cybersecurity Takes Priority: Several high-profile cybersecurity breaches at healthcare vendors and providers in the last year highlight that not enough is being done to keep patient data secure. Will migrating to the cloud help? Only time will tell.

The Takeaway

SIIM’s collegiality and coziness have always been a selling point for the meeting, even back in the days when it was known as SCAR. This year didn’t disappoint, as deals got done and relationships were built at the Gaylord National.

Be sure to visit our YouTube channel and LinkedIn page to view our video interviews from the floor of the meeting – it was great seeing you all at the show!

US Tomo for Dense Breasts

What’s the best way to provide supplemental imaging when screening women with dense breasts? A new study this week in Radiology offers support for a newer method, whole-breast ultrasound tomography. 

It’s well-known by now that dense breast tissue presents challenges to traditional X-ray-based mammography.

  • In fact, mammography screening’s mortality reduction is far lower in women with dense breasts compared to nondense breasts (13% vs. 41%). 

A variety of alternative technologies have been developed to provide supplemental imaging for women with dense breasts, from handheld ultrasound to breast MRI to molecular breast imaging. 

  • One supplemental technology is whole-breast ultrasound tomography, developed by Delphinus Medical Technologies; the firm’s SoftVue 3D system was approved by the FDA in 2021 as an adjunct to full-field digital mammography for screening women with dense breast tissue. 

With SoftVue, women lie prone on a table with the breast stabilized in a water-filled chamber that provides coupling of sound energy between the breast and a ring transducer that scans the entire breast in 2-4 minutes.

  • Unlike handheld ultrasound, the scanner produces volumetric coronal images that provide a better view of the fat-glandular interface, where many cancers are located.

SoftVue’s performance was analyzed by researchers from USC and the University of Chicago in a retrospective study funded by Delphinus. 

  • They performed SoftVue scans along with digital mammography on 140 women with dense breast tissue from 2017 to 2019; 36 of the women were eventually diagnosed with cancer. 

In all, 32 readers interpreted the scans, comparing the performance of FFDM with ultrasound tomography to FFDM alone, finding … 

  • Better performance with FFDM + ultrasound tomography (AUC=0.60 vs. 0.54)
  • An increase in sensitivity in women with mammograms graded BI-RADS 4 (suspicious) (37% vs. 30%) 
  • No statistically significant difference in sensitivity in BI-RADS 3 cases (probably benign) (40% vs. 33%, p=0.08)
  • A mean of 3.3 more true-positive and 0.9 more false-positive findings per reader with ultrasound tomography, a net gain of 2.4

The Takeaway

The findings indicate that ultrasound tomography could become a new supplementary tool for imaging women with dense breasts. They are also a shot in the arm for Delphinus, which as a smaller vendor has the challenge of competing with large multinational OEMs that also offer technologies for supplemental breast screening. 

Better Prostate MRI Tools

In past issues of The Imaging Wire, we’ve discussed some of the challenges to prostate cancer screening that have limited its wider adoption. But researchers continue to develop new tools for prostate imaging – particularly with MRI – that could flip the script. 

Three new studies were published in just the last week focusing on prostate MRI, two involving AI image analysis.

In a new study in The Lancet Oncology, researchers presented results from AI algorithms developed for the Prostate Imaging—Cancer Artificial Intelligence (PI-CAI) Challenge.

  • PI-CAI pitted teams from around the world in a competition to develop the best prostate AI algorithms, with results presented at recent RSNA and ECR conferences. 

Researchers measured the ensemble performance of top-performing PI-CAI algorithms for detecting clinically significant prostate cancer against 62 radiologists who used the PI-RADS system in a population of 400 cases, finding that AI …

  • Had performance superior to radiologists (AUROC=0.91 vs. 0.86)
  • Generated 50% fewer false-positive results
  • Detected 20% fewer low-grade cases 

Broader use of prostate AI could reduce inter-reader variability and the need for experienced radiologists to diagnose prostate cancer.
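The AUROC figures above summarize ranking ability: the probability that a randomly chosen cancer case receives a higher score than a randomly chosen non-cancer case (ties counting half). A minimal sketch of that definition, with illustrative scores rather than PI-CAI data:

```python
# AUROC as a rank statistic: P(score of a random positive > score of a
# random negative), counting ties as 0.5. Scores below are made up for
# illustration -- they are not from the PI-CAI study.
def auroc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.75, 0.6]  # AI scores for cancer cases
neg = [0.7, 0.4, 0.3, 0.2]   # AI scores for benign cases
print(auroc(pos, neg))       # -> 0.9375
```

On this reading, the jump from 0.86 to 0.91 means the AI ensemble ranks cancer cases above benign ones noticeably more often than the radiologist readers did.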

In the next study, in the Journal of Urology, researchers tested Avenda Health’s Unfold AI cancer mapping algorithm to measure the extent of tumors by analyzing their margins on MRI scans, finding that compared to physicians, AI … 

  • Had higher accuracy for defining tumor margins compared to two manual methods (85% vs. 67% and 76%)
  • Reduced underestimations of cancer extent with a significantly higher negative margin rate (73% vs. 1.6%)

AI wasn’t used in the final study, but this one could be the most important of the three due to its potential economic impact on prostate MRI.

  • Canadian researchers in Radiology tested a biparametric prostate MRI protocol that avoids the use of gadolinium contrast against multiparametric contrast-based MRI for guiding prostate biopsy. 

They compared the protocols in 1.5k patients with prostate lesions undergoing biopsy, finding…

  • No statistically significant difference in PPV between bpMRI and mpMRI for all prostate cancer (55% vs. 56%, p=0.61) 
  • No difference for clinically significant prostate cancer (34% vs. 34%, p=0.97)

They concluded that bpMRI offers lower costs and could improve access to prostate MRI by making the scans easier to perform.
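The "no significant difference" findings rest on comparing two proportions. A minimal two-proportion z-test sketch shows how such p-values are typically obtained; the counts below are hypothetical, chosen only to illustrate the method, not taken from the study:

```python
import math

# Two-proportion z-test: is the difference between two observed
# proportions (e.g. PPV under two MRI protocols) statistically significant?
def two_prop_ztest(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 412/750 vs. 421/750 positive biopsies
z, p = two_prop_ztest(412, 750, 421, 750)
print(f"z={z:.2f}, p={p:.2f}")
```

With proportions this close at these sample sizes, the p-value is far above 0.05, mirroring the bpMRI-vs-mpMRI result.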

The Takeaway

The advances in AI and MRI protocols shown in the new studies could easily be applied to prostate cancer screening, making it more economical, accessible, and clinically effective.  

Headlines from SNMMI 2024

SNMMI 2024 wrapped up this week in Toronto, Canada, with the conference once again demonstrating the utility of nuclear medicine and molecular imaging for applications ranging from neurology to oncology to therapeutics. 

An annual SNMMI highlight is always the Image of the Year designation, and this year’s meeting didn’t disappoint. 

  • The honor went to a set of ultra-high-resolution brain PET images acquired with United Imaging’s NeuroEXPLORER (NX) scanner, a PET/CT system that the company developed with Yale and UC Davis and introduced last year for research use (although a clinical introduction could be forthcoming). 

The NX system sports a cylindrical design with a 52.4cm diameter and long axial field-of-view of 49.5cm; in the talk presented at SNMMI, researchers compared it to high-resolution research tomograph images with tracers targeting different dopamine receptors and transporters.

  • Researchers said the NX system had “exceptional” resolution in cortex and subcortical structures, with “low noise and exquisite resolution,” and predicted NX would “dramatically expand the scope of brain PET studies.”

Other important presentations at SNMMI included papers finding … 

  • An AI algorithm developed at Johns Hopkins detected six different types of cancer and automatically quantified tumor burden on whole-body PET/CT scans
  • In a study of 10.5k patients, AI that analyzed SPECT/CT images was able to predict all-cause mortality with an AUC of 0.77 by using CT attenuation correction scans to calculate risk factors like coronary artery calcium
  • Cognitive training is less effective in older adults who have beta-amyloid deposits in the brain on PET scans
  • An ultra-low-dose PET protocol presented by researchers from Bern University Hospital in Switzerland and Siemens Healthineers used deep learning reconstruction for a 50X reduction in PET radiation dose, to 0.15 mSv
  • A gallium-68 FAPI-based PET radiotracer was more accurate than fluorine-18 FDG for systemic staging of newly diagnosed breast cancer
  • A new chelating agent that binds radiometals to the parts of molecules that target cancer reduced off-target toxicity in PSMA radiopharmaceutical therapy
  • A combination of alpha- and beta-radionuclide therapy that combined actinium-225 with lutetium-177 worked well for colorectal cancer in a preclinical study
  • Research sponsored by Novartis on radioligand therapy for prostate cancer with lutetium-177 PSMA-617 (Pluvicto) was chosen as Abstract of the Year

The Takeaway

This year’s SNMMI presentations highlight the exciting advances taking place in nuclear medicine and molecular imaging, with the rise of theranostics giving the field an entirely new wrinkle that places it even closer to the center of precision medicine. Perhaps a new letter – T – will need to be added to the conference before too long.

Advances in AI-Automated Echocardiography with Us2.ai

Echocardiography is a pillar of cardiac imaging, but it is operator-dependent and time-consuming to perform. In this interview, The Imaging Wire spoke with Seth Koeppel, Head of Business Development, and José Rivero, MD, RCS, of echo AI developer Us2.ai about how the company’s new V2 software moves the field toward fully automated echocardiography. 

The Imaging Wire: Can you give a little bit of background about Us2.ai and its solutions for automated echocardiography? 

Seth Koeppel: Us2.ai is a company that originated in Singapore. The first version of the software (Us2.V1) received FDA clearance a little over two years ago for an AI algorithm that automates the analysis and reporting of 23 key measurements on echocardiograms for the evaluation of diastolic and systolic function. 

In April 2024 we received an expanded regulatory clearance; now a total of 45 measurements are cleared. Including measurements derived from those core 45, almost 60 measurements are fully validated and automated, and with that Us2.V2 is bordering on full automation for echocardiography.

The application is vendor-agnostic – we basically can ingest any DICOM image and in two to three minutes produce a full report and analysis. 

The software replicates what the expert human does during the traditional 45-60 minutes of image acquisition and annotation in echocardiography. Typically, echocardiography involves acquiring images and video at 40 to 60 frames per second, resulting in some cases in up to 100 individual images from a two- or three-second loop. 

The human expert then scrolls through these images to identify the best end-diastolic and end-systolic frames, manually annotating and measuring them, which is time-consuming and requires hundreds of mouse clicks. This process is very operator-dependent and manual.

And so the advantage the AI has is that it will do all of that in a fraction of the time; it will annotate every image of every frame, producing more data, and it does it with zero variability. 

The Imaging Wire: AI is being developed for a lot of different medical imaging applications, but it seems like it’s particularly important for echocardiography. Why would you say that is? 

José Rivero: It’s well known that healthcare institutions and providers are dealing with a larger number of patients and more complex cases. Echo is basically a pillar of cardiac imaging and really touches every patient throughout the path of care. We bring efficiency to the workflow and clinical support for diagnosis and treatment and follow-ups, directly contributing to enhanced patient care.

Additionally, the variability is a huge challenge in echo, as it is operator-dependent. Much of what we see in echo is subjective, certain patient populations require follow-up imaging, and for such longitudinal follow-up exams you want to remove the inter-operator variability as much as possible.

Seth Koeppel: Echo is ripe for disruption. We are faced with a huge shortage of cardiac sonographers. If you simply go on Indeed.com and you type in “cardiac sonographer,” there’s over 4,000 positions open today in the US. Most of those have somewhere between a $10,000, $15,000, up to $20,000 signing bonus. It is an acute problem.

We’re very quickly approaching a situation where we’re running huge backlogs – months in some situations – to get just a baseline echo. The gold standard for diagnosis is an echocardiogram. And if you can’t perform them, you have patients who are going by the wayside. 

In our current system today, the average tech will do about eight echoes a day. An echo takes 45 to 60 minutes because it’s so manual and relies on expert humans. For the past 35 years echo has looked the same; there has been no real innovation other than better image quality, while more parameters were added, resulting in more things to analyze in that same 45 or 60 minutes. 

This is the first time that we can think about doing echo in less than 45 to 60 minutes, which is a huge enhancement in throughput because it addresses both that shortage of cardiac sonographers and the increasing demand for echo exams. 

It also represents a huge benefit to sonographers, who often suffer repetitive stress injuries due to the poor ergonomics of echo, holding the probe tightly pressed against the patient’s chest in one hand, and the other hand on the cart scrolling/clicking/measuring, etc., which results in a high incidence of repetitive stress injuries to neck, shoulder, wrists, etc. 

Studies have shown that 20-30% of techs leave the field due to work-related injury. If the AI can take on the role of making the majority of the measurements, in essence turning the sonographer into more of an “editor” than a “doer,” it has the potential to significantly reduce injury. 

Interestingly, we saw many facilities move to “off-cart” measurements during COVID to reduce the time the tech was exposed to the patient, and many realized the benefits and maintained this workflow, which we also see in pediatrics, as kids have a hard time lying on the table for 45 minutes. 

So with the introduction of AI in the echo workflow, the technicians acquire the images in 15-20 minutes and, in real time, the images processed by the AI software are all automatically labeled, annotated, and measured. Within 2-3 minutes, a full report is available for the tech to review, adjust (our measures are fully editable), confirm, and sign off on. 

You can immediately see the benefits of reducing the time the tech has the probe in their hand and the patient spends on the table, and the tech then gets to sit at an ergonomically correct workstation (proper keyboard, mouse, large monitors, chair, etc.) and do their reporting versus on-cart, which is where the injuries occur. 

It’s a worldwide shortage, not just here in the US; in other parts of the world, waitlist times to get an echo can be eight, 10, 12, or more months, which is just not acceptable.

The OPERA study in the UK demonstrated that the introduction of AI echo can tackle this issue. In Glasgow, the wait time for an echo was reduced from 12 months to under six weeks. 

The Imaging Wire: You just received clearance for V2, but your V1 has been in the clinical field for some time already. Can you tell us more about the feedback on the use of V1 from your customers?

José Rivero: Clinically, the focus of V1 was heart failure and pulmonary hypertension. This is a critical step, because with AI, we could rapidly identify patients with heart failure or pulmonary hypertension. 

One big step that has been taken by having the AI hand-in-hand with the mobile device is that you are taking echocardiography out of the hospital. So you can just go everywhere with this technology. 

We demonstrated the feasibility of new clinical pathways using AI echo out of the hospital, in clinics or primary care settings, including novice screening1, 2 (no previous experience in echocardiography but supported by point-of-care ultrasound including AI guidance and Us2.ai analysis and reporting).

Seth Koeppel: We’re addressing the efficiency problem. Most people are pegging the time savings for the tech on the overall echo somewhere around 15 to 20 minutes, which is significant. In a recent study using the Us2.ai software, conducted by a cardiologist in Japan and published in the Journal of Echocardiography, overall time for analysis and reporting fell by 70%.3 

The Imaging Wire: Let’s talk about version 2 of the software. When you started working on V2, what were some of the issues that you wanted to address with that?

Seth Koeppel: Version 1, version 2, it’s never changed for us, it’s about full automation of all echo. We aim to automate all the time-consuming and repetitive tasks the human has to do – image labeling and annotation, the clicks, measurements, and the analysis required.

Our medical affairs team works closely with the AI team and the feedback from our users to set the roadmap for the development of our software, prioritizing developments to meet clinical needs and expectations. In V2, we are now covering valve measurements and further enhancing our performance on HFpEF, as demonstrated now in comparison to the gold standard, pulmonary capillary wedge pressure (PCWP)4.

A new version is really about collaborating with leading institutions and researchers, acquiring excellent datasets for training the models until they reach a level of performance producing robust results we can all be confident in. Beyond the software development and training, we also engage in validation studies to further confirm the scientific efficiency of these models.

With V2 we’re also moving now into introducing different protocols, for example, contrast-enhanced imaging, which in the US is significant. We see in some clinics upwards of 50% to 60% use of contrast-enhanced imaging, where we don’t see that in other parts of the world. Our software is now validated for use with ultrasound-enhancing agents, and the measures correlate well.

Stress echo is another big application in echocardiography. So we’ve added that into the package now, and we’re starting to get into disease detection or disease prediction. 

V2 also addresses cardiac amyloidosis (CA): it is aligned with guideline-based measurements for identifying CA, and when such measurements are found, it reports them along with the actual guideline recommendations, supporting identification of a condition that could otherwise be missed. 

José Rivero: We are at a point where we can now go deeper into the clinical environment, into the echo lab itself, where everything is done and where the higher volumes are. Before, we had 23 measurements; now we are up to 45. 

And again, this can even serve as a screening tool. If we start thinking about subdividing the things we do in echocardiography with AI, this again expands to the mobile environment. There are a lot of different disease-based assessments that we do. We are now a more complete AI echocardiography assessment tool.

The Imaging Wire: Clinical guidelines are so important in cardiac imaging and in echocardiography. Us2.ai integrates and refers to guideline recommendations in its reporting. Can you talk about the importance of that, and how you incorporate this in the software?

José Rivero: Clinical guidelines play a crucial role in imaging for supporting standardized, evidence-based practice, as well as minimizing risks and improving quality for the diagnosis and treatment of patients. These are issued by experts, and adherence to guidelines is an important topic for quality of care and GDMT (guideline-directed medical therapies).

We are a scientifically driven company, so we recognize that international guidelines and recommendations are of utmost importance; hence, guideline indications are systematically visible, and discrepant measurement values are clearly highlighted.

Seth Koeppel: The beautiful thing about AI in echo is that echo is so structured that it lends itself perfectly to AI. If we can automate the measurements and then run them through all the complicated matrices of guidelines, it’s just full automation, right? It’s the ability to produce a full echo report without any human intervention required, and to do it in a fraction of the time, with zero variability, and in full consideration of international recommendations.

José Rivero: This is another level of support we provide: the sonographer only has to focus on image acquisition, while the cardiologist overreading and checking the data has these references brought to their attention.

With echo you need to support every point in the workflow, letting the sonographer really focus on image acquisition and the cardiologist on overreading and checking the data. In the end, those two come together when the cardiologist and the sonographers realize there’s efficiency on both ends. 

The Imaging Wire: V2 has only been out for a short time now, but has there been research published on the use of V2 in the field, and what are clinicians finding?

Seth Koeppel: In V1, our software included a section labeled “investigational,” and some AI measurements were accessible for research purposes only, as they had not yet received FDA clearance.

Opening access to these as investigational, research-only measurements has enabled users to test them and confirm the performance of the AI measurements in independently led publications and abstracts. That is why you are already seeing these studies, and it is wonderful to see users’ interest in publishing on AI echo, a “trust and verify” approach.

With V2 and the FDA clearance, these measurements, our new features and functionalities, are available for clinical use. 

The Imaging Wire: What about the economics of echo AI?

Seth Koeppel: Reimbursement is still front and center in echo, and people don’t realize how robust it is, partly because echo remains so manual and time consuming. Hospital echo still reimburses nearly $500 under HOPPS (the Hospital Outpatient Prospective Payment System). Compared with a CT, which today might get $140 global, or an MRI at $300-$350, an echo still pays around $500. 

When you think about the dynamic, it still relies on an expert human who typically makes $100,000 or more a year with benefits. And it takes 45 to 60 minutes. So the economics are such that reimbursement remains very high. 

But imagine if you could do two or three more echoes per day with the assistance of AI; you can immediately see the ROI. If you simply do two incremental echoes a day over the 254 days in a working year, that’s roughly 500 incremental echoes. 

If there are 2,080 hours in a year and we average about an echo every hour, most places produce about 2,000 echoes. Taking them to 2,500 or more at $500 apiece is an additional $250K per tech. Many hospitals have 8-10 techs scanning on any given day, so it’s a really compelling ROI. 
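The back-of-the-envelope math above can be sketched in a few lines of Python. The inputs are the interview’s stated assumptions (254 working days per year, roughly $500 HOPPS reimbursement per echo), and the result is gross incremental reimbursement per tech, before staffing or software costs:

```python
# Back-of-envelope ROI sketch using the figures quoted above.
# Inputs are the interview's stated assumptions, not measured data.

WORKING_DAYS_PER_YEAR = 254     # working days quoted in the interview
REIMBURSEMENT_PER_ECHO = 500    # approximate HOPPS hospital outpatient rate, USD

def incremental_revenue(extra_echoes_per_day: int) -> int:
    """Gross additional annual reimbursement per tech from extra AI-assisted echoes."""
    extra_echoes_per_year = extra_echoes_per_day * WORKING_DAYS_PER_YEAR
    return extra_echoes_per_year * REIMBURSEMENT_PER_ECHO

print(incremental_revenue(2))   # 508 extra echoes/year -> prints 254000
```

At the quoted rate, two extra echoes a day works out to just over $250K per tech per year; multiply by the number of techs scanning to estimate a department-level figure.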

This is an AI that has both a real clinical benefit and a huge ROI. There’s this whole debate out there about who pays for AI and how it gets paid for. This one’s a no-brainer.

The Imaging Wire: If you could step back and take a holistic view of V2, what benefits do you think that your software has for patients as well as hospitals and healthcare systems?

Seth Koeppel: It goes back to the inefficiencies of echo: you’re taking something that is highly manual and relies on expert humans who are in short supply. It’s as if you’re an expert craftsman who has been cutting by hand with a hand tool, and somebody walks in and hands you a power tool. We still need the expert human, who knows where to cut, what to cut, how to cut. But now he has a tool that lets him do the job much more efficiently, with a higher degree of accuracy. 

Let’s take another example. Strain has been particularly difficult for operators because every vendor, every cart manufacturer, has its own proprietary strain implementation. You can’t compare strain results from a GE cart to a Philips cart to a Siemens cart. It takes time, you have to train the operators, and there is human variability in there. 

In V2, strain is now included, it’s fully automated, and it’s vendor-neutral. You don’t have to buy expensive upgrades to carts to get access to it. So many, many problems are solved just in that one simple set of parameters. 

If we put it all together and look at the potential of AI echo, we can address the backlog and allow more echo to be done, not only in the echo lab but also in primary care settings and clinics, where AI echo opens new pathways for screening and detecting heart failure and heart disease at an early stage, when treatment is most effective.

This helps facilities facing the increasing demand for echo support and creates efficient longitudinal follow-up for oncology patients or populations at risk.

In addition, we can open access to echo exams in parts of the world that have neither the expensive carts nor the expert workforce, delivering on our mission to democratize echocardiography.

José Rivero: I would say that V2 is a very strong release, including contrast, stress echo, and strain. I would love to see all three, along with everything we had in V1, become mainstream, and to see customer satisfaction with this, because I think it brings a big solution to the echo world. 

The Imaging Wire: As the year progresses, what else can we look forward to seeing from Us2.ai?

José Rivero: In the clinical area, we will continue our work to expand the range of measurements and validate our detection models, but we are also very keen to start looking into pediatric echo.

Seth Koeppel: Our user interface has been greatly improved in V2, and that is something we really want to keep a focus on. We are also refining our automated reporting to include customization features, perfecting the report output to further support the clinicians reviewing it, and integrating LLMs to make reporting accessible to non-expert HCPs and to patients themselves. 

REFERENCES

  1. Tromp, J., Sarra, C., Bouchahda, N., Ben Messaoud, M., Zouari, F., Hummel, Y., Mzoughi, K., Kraiem, S., Fehri, W., Gamra, H., Lam, C. S. P., Mebazaa, A., & Addad, F. (2023). Nurse-led home-based detection of cardiac dysfunction by ultrasound: Results of the CUMIN pilot study. European Heart Journal - Digital Health.
  2. Huang, W., Lee, A., Tromp, J., Teo, L. Y., Chandramouli, C., Ng, C. T., Huang, F., Lam, C. S. P., & Ewe, S. H. (2023). Point-of-care AI-assisted echocardiography for screening of heart failure (HANES-HF). Journal of the American College of Cardiology, 81(8), 2145.
  3. Hirata, Y., Nomura, Y., Saijo, Y., Sata, M., & Kusunose, K. (2024). Reducing echocardiographic examination time through routine use of fully automated software: A comparative study of measurement and report creation time. Journal of Echocardiography.
  4. Yaku, H., Komtebedde, J., Silvestry, F. E., & Shah, S. J. (2024). Deep learning-based automated measurements of echocardiographic estimators of invasive pulmonary capillary wedge pressure perform equally to core lab measurements: Results from REDUCE LAP-HF II. Journal of the American College of Cardiology, 83(13), 316.