We hear a lot about AI’s potential to expand ultrasound to far more users and clinical settings, and a new study out of Singapore suggests that ultrasound’s AI-driven expansion might go far beyond what many of us had in mind.
The PANES-HF trial set up a home-based echo heart failure screening program that equipped a team of complete novices (no prior experience with echocardiography or in healthcare) with EchoNous’s AI-guided handheld ultrasound system and Us2.ai’s AI-automated echo analysis and reporting solution.
After just two weeks of training, the novices performed at-home echocardiography exams on 100 patients with suspected heart failure, completing the studies in an average of 11.5 minutes per patient.
When compared to the same 100 patients’ NT-proBNP blood test results and reference standard echo exams (expert sonographers, cart-based echo systems, and cardiologist interpretations), the novice echo AI pathway…
- Yielded interpretable results in 96 patients
- Improved risk prediction accuracy versus NT-proBNP by 30%
- Detected abnormal scans (LVEF <50%) with a 0.880 AUC (vs. NT-proBNP’s 0.651-0.690 AUCs)
- Achieved good agreement with expert clinicians for LVEF <50% detection (κ = 0.742)
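The agreement statistic quoted above is Cohen’s kappa, which corrects raw percent agreement for the agreement two readers would reach by chance. Here’s a minimal pure-Python sketch; the toy reader labels are invented for illustration, not taken from the study:

```python
from collections import Counter

def cohen_kappa(a, b):
    """Chance-corrected agreement between two raters over the same cases."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[l] * cb[l] for l in set(a) | set(b)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

# Toy example: two readers classifying 4 scans as LVEF <50% (1) or not (0)
print(cohen_kappa([0, 0, 1, 1], [0, 1, 1, 1]))  # → 0.5
```

A kappa of 0.742 therefore means the novice-AI pathway and the expert readers agreed well beyond what chance alone would produce.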
These findings were strong enough for the authors to suggest that emerging ultrasound and AI technologies will enable healthcare organizations to create completely new heart failure pathways. That might start with task-shifting from cardiologists to primary care, but could extend to novice-performed exams and home-based care.
Considering the rising prevalence of heart failure, the recent advances in HF treatments, and the continued sonographer shortage, there’s clearly a need for more accessible and efficient echo pathways — and this study is arguably the strongest evidence that AI might be at the center of those new pathways.
GE HealthCare took a major step towards expanding its ultrasound systems to new users and settings by acquiring AI guidance startup Caption Health.
GE plans to integrate Caption’s AI guidance technology into its ultrasound platform, starting with POCUS devices and echocardiography exams. GE specifically emphasized how its Caption integration will help streamline echo adoption among novice operators and bring heart failure exams into “doctors’ offices, the home, and alternate sites of care.”
- That’s particularly notable given healthcare’s major shift outside of hospital walls, especially considering that Caption has already developed a unique home echo exam and virtual diagnosis service.
- It’s also another sign that GE sees big potential for at-home ultrasound, coming less than a year after investing in home maternity ultrasound startup Pulsenmore.
GE didn’t disclose the tuck-in acquisition’s value. However, Caption is relatively large for an AI startup (79 employees on LinkedIn, >$62M raised) and is arguably the most established company in the ultrasound guidance segment (FDA & CE approved, CMS-reimbursed, notable alliances).
- The fact that GE HealthCare has already made two acquisitions since spinning off in early January (after a 16-month pause) also suggests that the newly-independent medtech giant has returned to M&A mode.
Of course, the acquisition is another sign that the imaging AI consolidation trend remains in full swing, marking at least the ninth AI startup acquisition since January 2022 and the third so far in 2023.
- One contributor to that AI consolidation surge appears to be ultrasound hardware vendors acquiring AI guidance companies: GE’s Caption acquisition comes about six months after Exo’s acquisition of Medo AI.
Ultrasound’s potential expansion to new users and clinical settings could create the kind of growth that most modalities only experience once in their lifetime (or never experience), and ease of use might dictate how far ultrasound is able to expand. That could make this acquisition particularly significant for GE HealthCare and for ultrasound’s path towards far broader adoption.
A Cedars-Sinai-led team developed an echocardiography AI model that was able to accurately assess coronary artery calcium buildup, potentially revealing a safer, more economical, and more accessible approach to CAC scoring.
The researchers used 1,635 Cedars-Sinai patients’ transthoracic echocardiogram (TTE) videos paired with their CT-based Agatston CAC scores to train an AI model to predict patients’ CAC scores based on their PLAX view TTE videos.
When tested against Cedars-Sinai TTEs that weren’t used for AI training, the TTE CAC AI model detected…
- Zero-CAC patients with “high discriminatory abilities” (AUC: 0.81)
- Intermediate-CAC patients “modestly well” (scores ≥200; AUC: 0.75)
- High-CAC patients “modestly well” (scores ≥400; AUC: 0.74)
When validated against 92 TTEs from an external Stanford dataset, the AI model similarly predicted which patients had zero and high CAC scores (AUCs: 0.75 & 0.85).
More importantly, the TTE AI CAC scores accurately predicted the patients’ future risks. TTE CAC scores predicted one-year mortality similarly to CT CAC scores, and they even improved overall prediction of low-risk patients by downgrading patients who had high CT CAC scores and zero TTE CAC scores.
CT-based CAC scoring is widely accepted, but it isn’t accessible to many patients, and concerns about its safety and value (cost, radiation, incidental findings) have kept the USPSTF from formally recommending it for coronary artery disease surveillance. We’d need a lot more research and AI development, but if TTE CAC AI solutions like this prove reliable, they could make CAC scoring far more accessible and potentially even more accepted.
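The AUCs reported in this and the other studies have a simple probabilistic reading: the chance that a randomly chosen positive case gets a higher model score than a randomly chosen negative case. A minimal sketch with invented scores:

```python
def auc(pos_scores, neg_scores):
    """P(random positive outranks random negative); ties count as half."""
    pairs = [(p, n) for p in pos_scores for n in neg_scores]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Toy scores: positives usually (but not always) outrank negatives
print(round(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]), 3))  # → 0.889
```

On that reading, the model’s 0.81 AUC for zero-CAC detection means it ranks a random zero-CAC patient below a random nonzero-CAC patient about 81% of the time.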
A team of Australian researchers developed an echo AI solution that accurately assesses patients’ aortic stenosis (AS) severity levels, including many patients with severe AS who might go undetected using current methods.
The researchers trained their AI-Decision Support Algorithm (AI-DSA) using the Australian Echo Database, which features more than 1M echo exams from over 630k patients, and includes the patients’ 5-year mortality outcomes.
Using 179k echo exams from the same Australian Echo Database, the researchers found that AI-DSA detected…
- Moderate-to-severe AS in 2,606 patients, who had a 56.2% five-year mortality rate
- Severe AS in 4,622 patients, who had a 67.9% five-year mortality rate
Those mortality rates are far higher than the rate among the study’s remaining 171,826 patients (22.9% over five years), giving the individuals that AI-DSA classified with moderate-to-severe or severe AS significantly higher odds of dying within five years (adjusted odds ratios: 1.82 & 2.80).
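The adjusted odds ratios above come from a regression that controls for other risk factors; a crude (unadjusted) odds ratio can be computed directly from the two groups’ event rates. A minimal sketch using the severe-AS figures quoted above, which shows how much larger the raw value is than the covariate-adjusted 2.80:

```python
def odds_ratio(p_group, p_reference):
    """Crude odds ratio from two event proportions."""
    odds = lambda p: p / (1 - p)
    return odds(p_group) / odds(p_reference)

# Severe-AS 5-year mortality (67.9%) vs. the remaining cohort (22.9%)
print(round(odds_ratio(0.679, 0.229), 2))  # → 7.12 (unadjusted)
```

The gap between 7.12 and 2.80 reflects how much of the raw mortality difference is explained by age and other covariates rather than AS itself.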
AI-DSA also served as a valuable complement to current methods, as 33% of the patients it identified with severe AS would not have been detected using the current echo assessment guidelines. Notably, severe AS patients flagged only by AI-DSA had 5-year mortality rates similar to those of patients flagged by both AI-DSA and the current guidelines (64.4% vs. 69.1%).
There’s been a lot of promising echo AI research lately, but most studies have highlighted the technology’s performance in comparison to sonographers. This new study suggests that echo AI might also help identify high-risk AS patients who wouldn’t be detected by sonographers (at least if they are using current methods), potentially steering more patients towards life-saving aortic valve replacement procedures.
A new JASE study showed that AI-based echocardiography measurements can be used to predict COVID patient mortality, but manual measurements performed by echo experts can’t. This could be seen as yet another “AI beats humans” study (or yet another COVID AI study), but it also gives important evidence of AI’s potential to reduce echo measurement variability.
Starting with transthoracic echocardiograms from 870 hospitalized COVID patients (13 hospitals, 9 countries, 27.4% who later died), the researchers utilized Ultromics’ EchoGo Core AI solution and a team of expert readers to measure left ventricular ejection fraction (LVEF) and LV longitudinal strain (LVLS). They then analyzed the measurements and applied them to mortality prediction models, finding that the AI-based measurements:
- Were “significant predictors” of patient mortality (LVEF: OR=0.974, p=0.003; LVLS: OR=1.060, p=0.004), while the manual measurements couldn’t be used to predict mortality
- Had significantly less variability than the experts’ manual measurements
- Were as “feasible” as manual measurements across the various echo exams
- Showed stronger correlations with other COVID biomarkers (e.g. diastolic blood pressure)
- Combined with other biomarkers to produce even more accurate mortality predictions
The authors didn’t seem too surprised that the AI measurements had less variability, or that reducing measurement variability “consequently increased the statistical power to predict mortality.”
They also found that sonographers’ original scanning inconsistency was responsible for nearly half of the experts’ measurement variability, suggesting that a combination of echo guidance AI software (e.g. Caption or UltraSight) with echo reporting AI tools (e.g. Us2.ai or Ultromics) could “further reduce variability.”
Echo AI measurements aren’t about to become a go-to COVID mortality biomarker (clinical factors and comorbidities are much stronger predictors), but this study makes a strong case for echo AI’s measurement consistency advantage. It’s also a reminder that reducing variability improves overall accuracy, which would be valuable for sophisticated prediction models or everyday echocardiography operations.
A new JACC study showed that Ultromics’ EchoGo Pro AI solution can accurately classify stress echocardiograms, while improving clinician performance with a particularly challenging and operator-dependent exam.
The researchers used EchoGo Pro to independently analyze 154 stress echo studies, leveraging the solution’s 31 image features to identify patients with severe coronary artery disease with a 0.927 AUC (84.4% sensitivity; 92.7% specificity).
EchoGo Pro maintained similar performance with a version of the test dataset that excluded the 38 patients with known coronary artery disease or resting wall motion abnormalities (90.5% sensitivity; 88.4% specificity).
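The sensitivity and specificity figures throughout these results are just row-wise rates from a 2×2 confusion matrix. A minimal sketch; the counts below are invented for illustration, not taken from the study:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical read-out: 45 true positives, 5 missed, 80 true negatives, 20 false alarms
sens, spec = sens_spec(tp=45, fn=5, tn=80, fp=20)
print(f"{sens:.1%} sensitivity, {spec:.1%} specificity")  # → 90.0% sensitivity, 80.0% specificity
```

Framed this way, EchoGo Pro’s 84.4%/92.7% pair says it caught most severe CAD patients while raising relatively few false alarms.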
The researchers then had four physicians with different levels of stress echo experience analyze the same 154 studies with and without AI support, finding that the EchoGo Pro reports:
- Improved the readers’ average AUC (0.877 vs. 0.931)
- Increased their mean sensitivity (85% vs. 95%)
- Didn’t hurt their specificity (83.6% vs. 85%)
- Increased their number of confident reads (440 vs. 483)
- Reduced their number of non-confident reads (152 vs. 109)
- Improved their diagnostic agreement rates (0.68-0.79 vs. 0.83-0.97)
Ultromics’ stress echo reports improved the physicians’ interpretation accuracy, confidence, and reproducibility, without increasing false positives. That list of improvements satisfies most of the requirements clinicians have for AI (in addition to speed/efficiency), and it represents another solid example of echo AI’s real-world potential.