Advances in AI-Automated Echocardiography with Us2.ai

Echocardiography is a pillar of cardiac imaging, but it is operator-dependent and time-consuming to perform. In this interview, The Imaging Wire spoke with Seth Koeppel, Head of Business Development, and José Rivero, MD, RCS, of echo AI developer Us2.ai about how the company’s new V2 software moves the field toward fully automated echocardiography. 

The Imaging Wire: Can you give a little bit of background about Us2.ai and its solutions for automated echocardiography? 

Seth Koeppel: Us2.ai is a company that originated in Singapore. The first version of the software (Us2.V1) received FDA clearance a little over two years ago for an AI algorithm that automates the analysis and reporting of 23 key echocardiographic measurements for the evaluation of diastolic and systolic function.

In April 2024 we received an expanded regulatory clearance – a total of 45 measurements are now cleared. Including measurements derived from those core 45, almost 60 measurements are now fully validated and automated, and with that Us2.V2 is bordering on full automation for echocardiography.

The application is vendor-agnostic – we can ingest essentially any DICOM image and in two to three minutes produce a full report and analysis.

The software replicates what the expert human does during the traditional 45-60 minutes of image acquisition and annotation in echocardiography. Typically, echocardiography involves acquiring images and video at 40 to 60 frames per second, in some cases resulting in 100 or more individual frames from a two- or three-second loop.

The human expert then scrolls through these images to identify the best end-diastolic and end-systolic frames, manually annotating and measuring them, which is time-consuming and requires hundreds of mouse clicks. This process is very operator-dependent and manual.
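
To make the arithmetic and the frame-selection step concrete, here is a minimal sketch in Python. It assumes an upstream segmentation model has already produced a left-ventricle (LV) cavity area for each frame – the lv_areas input is a hypothetical placeholder, and this is an illustration of the general approach, not Us2.ai's actual pipeline:

```python
# Illustrative sketch only – not Us2.ai's implementation.

# A 2-3 second loop at 40-60 fps yields roughly 80-180 frames:
def frames_in_loop(fps: int, seconds: float) -> int:
    return int(fps * seconds)

# End-diastole (ED) is conventionally the frame with the largest LV cavity,
# end-systole (ES) the smallest; the per-frame areas here would come from a
# hypothetical upstream segmentation model.
def select_ed_es(lv_areas: list[float]) -> tuple[int, int]:
    ed = max(range(len(lv_areas)), key=lv_areas.__getitem__)
    es = min(range(len(lv_areas)), key=lv_areas.__getitem__)
    return ed, es

print(frames_in_loop(40, 2.5))  # -> 100, matching the "100 frames" figure above
```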

The advantage of the AI is that it does all of that in a fraction of the time: it annotates every frame of every loop, producing more data, and it does so with zero variability.

The Imaging Wire: AI is being developed for a lot of different medical imaging applications, but it seems like it’s particularly important for echocardiography. Why would you say that is? 

José Rivero: It’s well known that healthcare institutions and providers are dealing with a larger number of patients and more complex cases. Echo is basically a pillar of cardiac imaging and really touches every patient throughout the path of care. We bring efficiency to the workflow and clinical support for diagnosis and treatment and follow-ups, directly contributing to enhanced patient care.

Additionally, variability is a huge challenge in echo, as it is operator-dependent. Much of what we see in echo is subjective; certain patient populations require follow-up imaging, and for such longitudinal follow-up exams you want to remove inter-operator variability as much as possible.

Seth Koeppel: Echo is ripe for disruption. We are faced with a huge shortage of cardiac sonographers. If you simply go on Indeed.com and type in “cardiac sonographer,” there are over 4,000 positions open today in the US, most offering signing bonuses of $10,000 to $20,000. It is an acute problem.

We’re very quickly approaching a situation where we’re running huge backlogs – months in some cases – just to get a baseline echo. The echocardiogram is the gold standard for diagnosis, and if you can’t perform them, patients fall by the wayside.

In our current system today, the average tech does about eight echoes a day. An echo takes 45 to 60 minutes because it is so manual and relies on expert humans. For the past 35 years echo has looked much the same: image quality has gotten better, but there has been little workflow innovation, while more parameters have been added – more things to analyze in that same 45 or 60 minutes.

This is the first time that we can think about doing echo in less than 45 to 60 minutes, which is a huge enhancement in throughput because it addresses both that shortage of cardiac sonographers and the increasing demand for echo exams. 

It also represents a huge benefit to sonographers, who often suffer repetitive stress injuries due to the poor ergonomics of echo: one hand holds the probe tightly pressed against the patient’s chest while the other works the cart – scrolling, clicking, measuring – resulting in a high incidence of injuries to the neck, shoulders, and wrists.

Studies have shown that 20-30% of techs leave the field due to work-related injury. If the AI can take on the role of making the majority of the measurements, in essence turning the sonographer into more of an “editor” than a “doer,” it has the potential to significantly reduce injury. 

Interestingly, we saw many facilities move to “off-cart” measurements during COVID to reduce the time the tech was exposed to the patient, and many realized the benefits and kept this workflow. We also see it in pediatrics, as kids have a hard time lying on the table for 45 minutes.

So with the introduction of AI in the echo workflow, the tech acquires the images in 15 to 20 minutes and, in real time, the AI software automatically labels, annotates, and measures them. Within two to three minutes, a full report is available for the tech to review, adjust (our measurements are fully editable), confirm, and sign off.

You can immediately see the benefits: less time with the probe in the tech’s hand and less time with the patient on the table. The tech then gets to do their reporting at an ergonomically correct workstation (proper keyboard, mouse, large monitors, chair, etc.) rather than on-cart, which is where the injuries occur.

It’s a worldwide shortage, not just here in the US. In other parts of the world, wait times for an echo can be eight, 10, 12, or more months, which is just not acceptable.

The OPERA study in the UK demonstrated that the introduction of AI echo can tackle this issue. In Glasgow, the wait time for an echo was reduced from 12 months to under six weeks. 

The Imaging Wire: You just received clearance for V2, but V1 has been in the clinical field for some time already. Can you tell us more about the feedback from your customers on the use of V1?

José Rivero: Clinically, the focus of V1 was heart failure and pulmonary hypertension. This is a critical step, because with AI, we could rapidly identify patients with heart failure or pulmonary hypertension. 

One big step enabled by pairing the AI with mobile devices is that you can take echocardiography out of the hospital. You can go just about anywhere with this technology.

We demonstrated the feasibility of new clinical pathways using AI echo out of the hospital, in clinics or primary care settings, including novice screening1,2 (operators with no previous experience in echocardiography, supported by point-of-care ultrasound with AI guidance and Us2.ai analysis and reporting).

Seth Koeppel: We’re addressing the efficiency problem. Most people peg the time savings for the tech at around 15 to 20 minutes per echo, which is significant. A recent study using the Us2.ai software, conducted by a cardiologist in Japan and published in the Journal of Echocardiography, reported a 70% reduction in overall time for analysis and reporting.3

The Imaging Wire: Let’s talk about version 2 of the software. When you started working on V2, what were some of the issues that you wanted to address with that?

Seth Koeppel: Version 1, version 2 – the goal has never changed for us: full automation of all echo. We aim to automate all the time-consuming and repetitive tasks the human has to do – the image labeling and annotation, the clicks, the measurements, and the analysis required.

Our medical affairs team works closely with the AI team, and feedback from our users helps set the roadmap for the development of our software, prioritizing developments that meet clinical needs and expectations. In V2, we now cover valve measurements and further enhance our performance on HFpEF, as demonstrated in comparison to the gold standard, pulmonary capillary wedge pressure (PCWP).4

A new version is really about collaborating with leading institutions and researchers, acquiring excellent datasets for training the models until they reach a level of performance that produces robust results we can all be confident in. Beyond software development and training, we also conduct validation studies to further confirm the scientific validity of these models.

With V2 we’re also introducing different protocols – for example, contrast-enhanced imaging, which is significant in the US. In some clinics we see upwards of 50% to 60% of exams using contrast-enhanced imaging, which we don’t see in other parts of the world. Our software is now validated for use with ultrasound-enhancing agents, and the measurements correlate well.

Stress echo is another big application in echocardiography. So we’ve added that into the package now, and we’re starting to get into disease detection or disease prediction. 

V2 also addresses cardiac amyloidosis (CA): it is aligned with guideline-based measurements for identifying CA, and when those measurements are found it reports them alongside the relevant guideline recommendations, supporting the identification of a condition that could otherwise be missed.

José Rivero: We are at a point where we can really go deeper into the clinical environment – into the echo lab itself, where everything is done and where the higher volumes are. Before we had 23 measurements; now we are up to 45.

And again, it can even be a screening tool. If we start thinking about subdividing the things we do in echocardiography with AI, this expands to the mobile environment. There are a lot of different disease-based assessments that we do. We are now a more complete AI echocardiography assessment tool.

The Imaging Wire: Clinical guidelines are so important in cardiac imaging and in echocardiography. Us2.ai integrates and refers to guideline recommendations in its reporting. Can you talk about the importance of that, and how you incorporate this in the software?

José Rivero: Clinical guidelines play a crucial role in imaging for supporting standardized, evidence-based practice, as well as minimizing risks and improving quality for the diagnosis and treatment of patients. These are issued by experts, and adherence to guidelines is an important topic for quality of care and GDMT (guideline-directed medical therapies).

We are a scientifically driven company, so we recognize that international guidelines and recommendations are of the utmost importance; hence, guideline indications are systematically visible, and discrepant measurement values are clearly highlighted.

Seth Koeppel: The beautiful thing about AI in echo is that echo is so structured that it lends itself perfectly to AI. If we can automate the measurements and then run them through all the complicated matrices of guidelines, it’s just full automation, right? It’s the ability to produce a full echo report without any human intervention required, and to do it in a fraction of the time, with zero variability, and in full consideration of international recommendations.
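
As a toy illustration of what running automated measurements through “matrices of guidelines” can look like in code, here is a short sketch that maps a measured left-ventricular ejection fraction (LVEF) to a category. The cutoffs follow the 2015 ASE/EACVI chamber-quantification recommendations, but the function itself is our simplified example, not Us2.ai's implementation:

```python
# Simplified example – cutoffs per the 2015 ASE/EACVI recommendations
# (normal LVEF: 52-72% for men, 54-74% for women); not Us2.ai's code.
def classify_lvef(ef_percent: float, sex: str) -> str:
    normal_low = 52.0 if sex == "male" else 54.0
    if ef_percent >= normal_low:
        return "normal"
    if ef_percent >= 41.0:
        return "mildly abnormal"
    if ef_percent >= 30.0:
        return "moderately abnormal"
    return "severely abnormal"

print(classify_lvef(45.0, "female"))  # -> "mildly abnormal"
```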

José Rivero: This is another level of support we provide: the sonographer only has to focus on the image acquisition, and the cardiologist doing the overreading and checking the data will have these references brought to their attention.

With echo you need to cover every point in the workflow, so the sonographer can really focus on image acquisition and the cardiologist on overreading and checking the data. In the end, those two come together when the cardiologist and the sonographers realize there’s efficiency on both ends.

The Imaging Wire: V2 has only been out for a short time now but has there been research published on use of V2 in the field and what are clinicians finding?

Seth Koeppel: In V1, our software included a section labeled “investigational,” and some AI measurements were accessible for research purposes only as they had not yet received FDA clearance.

Opening access to these as investigational, research-only measurements has enabled users to test them and confirm the performance of the AI in independently led publications and abstracts. That is why you are already seeing these studies – and it is wonderful to see users’ interest in publishing on AI echo, a “trust and verify” approach.

With V2 and the FDA clearance, these measurements, our new features and functionalities, are available for clinical use. 

The Imaging Wire: What about the economics of echo AI?

Seth Koeppel: Reimbursement is still front and center in echo, and people don’t realize how robust it is, partially because echo is so manual and time-consuming. Hospital echo still reimburses nearly $500 under HOPPS (the Hospital Outpatient Prospective Payment System). By comparison, a CT today might get $140 global and an MRI $300-$350, while an echo still pays about $500.

When you think about the dynamics, echo still relies on an expert human who typically makes $100,000 or more a year with benefits, and it takes 45 to 60 minutes. So the economics are such that the reimbursement remains very high.

But imagine if you can do two or three more echoes per day with the assistance of AI – you can immediately see the ROI. If you can simply do two incremental echoes a day, and there are 254 working days in a year, that’s roughly 500 incremental echoes.

If there are 2,080 hours in a year and we average about an echo every hour, most places are producing about 2,000 echoes; now you’re taking them to 2,500 or more at $500 each, which is an additional $250K or so per tech per year. Many hospitals have 8-10 techs scanning on any given day, so it’s a really compelling ROI.
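
The back-of-envelope arithmetic from the figures quoted above, as a quick sketch (illustrative only; actual reimbursement and volumes vary by site):

```python
extra_per_day = 2        # incremental echoes enabled by AI
working_days = 254       # working days per year
reimbursement = 500      # approximate hospital echo rate under HOPPS, USD

extra_echoes = extra_per_day * working_days   # 508, "an incremental 500"
extra_revenue = extra_echoes * reimbursement  # 254,000 USD per tech per year
print(extra_echoes, extra_revenue)
```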

This is an AI that has both a clinical benefit and a huge ROI. There’s a whole debate out there about who pays for AI and how it gets paid for. This one’s a no-brainer.

The Imaging Wire: If you could step back and take a holistic view of V2, what benefits do you think that your software has for patients as well as hospitals and healthcare systems?

Seth Koeppel: It goes back to the inefficiencies of echo – you’re taking something that is highly manual and relies on expert humans who are in short supply. It’s as if you’re an expert craftsman who has been cutting by hand, and somebody walks in and hands you a power tool. We still need the expert human, who knows where to cut, what to cut, and how to cut. But now somebody has given them a tool that allows them to do the job much more efficiently, with a higher degree of accuracy.

Let’s take another example. Strain has been particularly difficult for operators because every vendor – every cart manufacturer – has its own proprietary strain. You can’t compare strain results from a GE cart to a Philips cart to a Siemens cart. It takes time, you have to train the operators, and you have human variability in there.

In V2, strain is now included, fully automated, and vendor-neutral. You don’t have to buy expensive cart upgrades to get access to it. Many, many problems are solved just in that one set of parameters.

If we put it all together and look at the potential of AI echo, we can address the backlog and allow more echo to be done – not only in the echo lab but also in primary care settings and clinics, where AI echo opens new pathways for screening and detecting heart failure and heart disease at an early stage, enabling earlier and more efficient treatment.

This helps facilities facing the increasing demand for echo and creates efficient longitudinal follow-up for oncology patients and populations at risk.

In addition, we can open access to echo exams in parts of the world that have neither the expensive carts nor the expert workforce, delivering on our mission to democratize echocardiography.

José Rivero: I would say that V2 is a very strong release, which includes contrast, stress echo, and strain. I would love to see all three – along with everything we had in V1 – become mainstream, and to see the customer satisfaction with it, because I think it brings a big solution to the echo world.

The Imaging Wire: As the year progresses, what else can we look forward to seeing from Us2.ai?

José Rivero: In the clinical area, we will continue our work to expand the range of measurements and validate our detection models, but we are also very keen to start looking into pediatric echo.

Seth Koeppel: Our user interface has been greatly improved in V2, and this is something we really want to keep focusing on. We are also refining our automated reporting to include customization features, perfecting the report output to further support the clinicians reviewing it, and integrating LLMs to make reporting accessible to non-expert HCPs and to patients themselves.

REFERENCES

  1. Tromp, J., Sarra, C., Bouchahda, N., Ben Messaoud, M., Zouari, F., Hummel, Y., Mzoughi, K., Kraiem, S., Fehri, W., Gamra, H., Lam, C. S. P., Mebazaa, A., & Addad, F. (2023). Nurse-led home-based detection of cardiac dysfunction by ultrasound: Results of the CUMIN pilot study. European Heart Journal – Digital Health.
  2. Huang, W., Lee, A., Tromp, J., Teo, L. Y., Chandramouli, C., Ng, C. T., Huang, F., Lam, C. S. P., & Ewe, S. H. (2023). Point-of-care AI-assisted echocardiography for screening of heart failure (HANES-HF). Journal of the American College of Cardiology, 81(8), 2145.
  3. Hirata, Y., Nomura, Y., Saijo, Y., Sata, M., & Kusunose, K. (2024). Reducing echocardiographic examination time through routine use of fully automated software: A comparative study of measurement and report creation time. Journal of Echocardiography.
  4. Yaku, H., Komtebedde, J., Silvestry, F. E., & Shah, S. J. (2024). Deep learning-based automated measurements of echocardiographic estimators of invasive pulmonary capillary wedge pressure perform equally to core lab measurements: Results from REDUCE LAP-HF II. Journal of the American College of Cardiology, 83(13), 316.

Accessing Quality Data for AI Training

One of the biggest roadblocks in medical AI development is the lack of high-quality, diverse data for these technologies to train on.

What Is the Issue with Data Access?

Artificial Intelligence (AI) has emerged as a game-changer in the realm of medical imaging, with immense potential to revolutionize clinical practices. AI-powered medical imaging can efficiently identify intricate patterns within data and provide quantitative assessments of disease biomarkers. This technology not only enhances the accuracy of diagnosis but can also significantly speed up the diagnostic process, ultimately improving patient outcomes.

While the landscape is promising, medical innovators grapple with challenges in accessing high-quality, diverse, and timely data, which is vital for training AI and driving progress.

A 2019 study from the Massachusetts Institute of Technology found that over half of medical AI studies relied predominantly on databases from high-income countries, particularly the United States and China. If models trained on homogeneous data are used clinically in diverse populations, they could pose a risk to patients and worsen the health inequalities experienced by underrepresented groups. In the United States, if the Food and Drug Administration deems these risks too high, it could even reject a product’s application for approval.

In trying to get hold of the best training data, AI developers, particularly startups and individual researchers, face a web of complexities, including legal, ethical, and technical considerations. Issues like data privacy, security, interoperability, and data quality compound these challenges, all of which are crucial in the effective and responsible utilization of healthcare data.

One company working to overcome these hurdles in hope of accelerated and high-quality innovations is Gradient Health.

Gradient Health’s Approach

Gradient Health offers AI developers instant access to one of the world’s largest libraries of anonymized medical images, sourced from hundreds of global hospitals, clinics, and research centers. This data is meticulously de-identified for compliance and can be tailored by vendors to suit their project’s needs and exported in machine learning-ready DICOM + JSON formats.
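
For a sense of what “machine learning-ready DICOM + JSON” means in practice, here is a minimal loading sketch using the open-source pydicom library. The file names and JSON label fields are hypothetical illustrations, not Gradient Health's actual export schema:

```python
import json
import pydicom  # open-source DICOM reader (pixel data also requires numpy)

# Hypothetical paired export: one de-identified DICOM image plus a JSON sidecar.
ds = pydicom.dcmread("study_0001.dcm")
pixels = ds.pixel_array            # image as a numpy array, ready for training
with open("study_0001.json") as f:
    labels = json.load(f)          # e.g. {"modality": "CR", "finding": "normal"}

print(pixels.shape, labels)
```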

By partnering with Gradient Health, innovators can use these extensive, diverse datasets to train and validate their AI algorithms, mitigating bias in medical AI and advancing the development of precise, high-quality medical solutions.

Gaining access to top-tier data at the outset of the development process promises long-term benefits. Here’s how:

  • Expand Market Presence: Access the latest cross-vendor datasets to develop medical innovations, expanding your market share.
  • Global Expansion: Enter new regions swiftly with locally sourced data from your target markets, accelerating your global reach.
  • Competitive Edge: Obtain on-demand training data for imaging modalities and disease areas, facilitating product portfolio expansion.
  • Speed to Market: Quickly acquire data for product training and validation, reducing sourcing time and expediting regulatory clearances for faster patient delivery.

“After looking for a data provider for many weeks, I was not able to get even a sample delivery within one month. I was immensely glad to work with Gradient and go from first contact to final delivery within one week!” said Julien Schmidt, chief operations officer and co-founder at Mango Medical.

The Outlook

In recent years, medical AI has experienced significant growth. Innovations in medical imaging in particular have played a pivotal role in enabling healthcare professionals to identify diseases earlier and more accurately in patients with a range of conditions. 

Gradient Health offers a data-compliant, intuitive platform for AI developers, facilitating access to the essential data required to train these critical technologies. This approach holds the potential to save time, resources, and, most importantly, lives. 

More information about Gradient Health is available on the company’s website. They will also be exhibiting at RSNA 2023 in booth #5149 in the South Hall.

Autonomous AI for Medical Imaging is Here. Should We Embrace It?

What is autonomous artificial intelligence, and is radiology ready for this new technology? In this article, we explore one of the most exciting autonomous AI applications: ChestLink from Oxipit.

What is Autonomous AI? 

Up to now, most interpretive AI solutions have focused on assisting radiologists with analyzing medical images. In this scenario, AI provides suggestions to radiologists and alerts them to suspicious areas, but the final diagnosis is the physician’s responsibility.

Autonomous AI flips the script by having AI run independently of the radiologist, such as by analyzing a large batch of chest X-ray exams for tuberculosis to screen out those certain to be normal. This can significantly reduce the primary care workload, where healthcare providers who offer preventive health checkups may see up to 80% of chest X-rays with no abnormalities. 

Autonomous AI frees the radiologist to focus on cases with suspicious pathology – with the potential of delivering a more accurate diagnosis to patients in real need.

One of the first of this new breed of autonomous AI is ChestLink from Oxipit. The solution received the CE Mark in March 2022, and more than a year later it is still the only AI application capable of autonomous performance. 

How ChestLink Works

ChestLink produces final chest X-ray reports on healthy patients with no involvement from human radiologists. The application only reports autonomously on chest X-ray studies where it is highly confident that the image does not include abnormalities. These studies are automatically removed from the reporting workflow. 

ChestLink enables radiologists to report on studies most likely to have abnormalities. In current clinical deployments, ChestLink automates 10-30% of all chest X-ray workflow. The exact percentage depends on the type of medical institution, with primary care facilities having the most potential for automation.
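
Conceptually, this triage reduces to a confidence threshold: studies the model is highly confident are normal receive a final automated report, and everything else stays on the radiologist’s worklist. A schematic sketch follows – the scoring function and threshold value are hypothetical placeholders, not Oxipit’s published internals:

```python
# Schematic triage loop – illustrative only, not Oxipit's implementation.
AUTONOMY_THRESHOLD = 0.02  # hypothetical: automate only near-certain normals

def triage(studies, abnormality_score):
    """Split studies into autonomously reported normals and a human worklist."""
    auto_reported, radiologist_queue = [], []
    for study in studies:
        if abnormality_score(study) < AUTONOMY_THRESHOLD:
            auto_reported.append(study)      # final report, no human read
        else:
            radiologist_queue.append(study)  # routed to the radiologist
    return auto_reported, radiologist_queue
```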

ChestLink Clinical Validation

ChestLink was trained on a dataset of over 500k images. In clinical validation studies, ChestLink consistently performed at 99%+ sensitivity.
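
Sensitivity is the fraction of truly abnormal studies the model flags, and it is the metric that matters most when normals are reported without a human read, because every false negative is a missed disease. A quick worked example:

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    return true_positives / (true_positives + false_negatives)

# 990 abnormal studies flagged, 10 missed -> 99% sensitivity
print(sensitivity(990, 10))  # -> 0.99
```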

A recent study published in Radiology highlighted the sensitivity of the application.

“The most surprising finding was just how sensitive this AI tool was for all kinds of chest disease. In fact, we could not find a single chest X-ray in our database where the algorithm made a major mistake. Furthermore, the AI tool had a sensitivity overall better than the clinical board-certified radiologists,” said study co-author Louis Lind Plesner, MD, from the Department of Radiology at the Herlev and Gentofte Hospital in Copenhagen, Denmark.

In this study, ChestLink autonomously reported on 28% of all normal studies.

In another study at the Oulu University Hospital in Finland, researchers concluded that AI could reliably remove 36.4% of normal chest X-rays from the reporting workflow with a minimal number of false negatives, leading to effectively no compromise on patient safety. 

Safe Path to AI Autonomy

Oxipit ChestLink is currently used in healthcare facilities in the Netherlands, Finland, Lithuania, and other European countries, and is in the trial phase for deployment in one of the leading hospitals in England.

ChestLink follows a three-stage framework for clinical deployment.

  • Retrospective analysis. ChestLink analyzes a couple of years’ worth (100k+) of historic chest X-ray studies at the medical institution. In this analysis the product is validated on real-world data, and it realistically estimates what fraction of the reporting scope can be automated.
  • Semi-autonomous operations. The application moves into prospective settings, analyzing images in near-real time. ChestLink produces preliminary reports for healthy patients, which may then be approved by a certified clinician.
  • Autonomous operations. The application autonomously reports on high-confidence healthy patient studies. The application performance is monitored in real-time with analytical tools.

Are We There Yet?

ChestLink aims to address the shortage of clinical radiologists worldwide, which has led to a substantial decline in care quality.

In the UK, the NHS currently faces a massive 33% shortfall in its radiology workforce. Nearly 71% of clinical directors of UK radiology departments feel that they do not have a sufficient number of radiologists to deliver safe and effective patient care.

ChestLink offers a safe pathway into autonomous operations by automating a significant and somewhat mundane portion of radiologist workflow without any negative effects for patient care. 

So should we embrace autonomous AI? The real question should be, can we afford not to? 
