Echo AI Detects More Aortic Stenosis

A team of Australian researchers developed an echo AI solution that accurately assesses patients’ aortic stenosis (AS) severity levels, including many patients with severe AS who might go undetected using current methods.

The researchers trained their AI-Decision Support Algorithm (AI-DSA) using the Australian Echo Database, which features more than 1M echo exams from over 630k patients, and includes the patients’ 5-year mortality outcomes.

Using 179k echo exams from the same Australian Echo Database, the researchers found that AI-DSA detected…

  • Moderate-to-severe AS in 2,606 patients, who had a 56.2% five-year mortality rate
  • Severe AS in 4,622 patients, who had a 67.9% five-year mortality rate

Those mortality rates are far higher than those of the study's remaining 171,826 patients (22.9% five-year mortality rate), meaning that patients AI-DSA classified with moderate-to-severe or severe AS had significantly higher odds of dying within five years (adjusted odds ratios: 1.82 and 2.80, respectively).
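For context on how the raw rates relate to those odds ratios: an *unadjusted* odds ratio computed straight from the mortality rates comes out much larger (roughly 4.3 and 7.1), because the study's reported 1.82 and 2.80 are adjusted for covariates. A minimal sketch of the unadjusted calculation, using only the rates quoted above:

```python
def odds_ratio(rate_group: float, rate_reference: float) -> float:
    """Unadjusted odds ratio between two event rates."""
    odds_group = rate_group / (1 - rate_group)
    odds_ref = rate_reference / (1 - rate_reference)
    return odds_group / odds_ref

# Five-year mortality rates reported in the study summary
print(odds_ratio(0.562, 0.229))  # moderate-to-severe AS vs. remaining patients (~4.3)
print(odds_ratio(0.679, 0.229))  # severe AS vs. remaining patients (~7.1)
```

The gap between these raw figures and the adjusted 1.82 / 2.80 reflects the covariate adjustment (e.g. age and comorbidities), not an error in either number.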

AI-DSA also served as a valuable complement to current methods, as 33% of the patients that AI-DSA identified with severe AS would not have been detected using the current echo assessment guidelines. Notably, severe AS patients who were flagged only by AI-DSA had five-year mortality rates similar to patients who were flagged by both AI-DSA and the current guidelines (64.4% vs. 69.1%), suggesting the AI-only group was just as high-risk.

Takeaway

There’s been a lot of promising echo AI research lately, but most studies have highlighted the technology’s performance in comparison to sonographers. This new study suggests that echo AI might also help identify high-risk AS patients who wouldn’t be detected by sonographers (at least if they are using current methods), potentially steering more patients towards life-saving aortic valve replacement procedures.

Detecting the Radiographically Occult

A new study published in European Heart Journal – Digital Health suggests that AI can detect aortic stenosis (AS) in chest X-rays, which would be a major breakthrough if confirmed, but will be met with plenty of skepticism until then.

The Models – The Japan-based research team trained/validated/tested three DL models using 10,433 CXRs from 5,638 patients (all from the same institution), using echocardiography assessments to label each image as AS-positive or AS-negative.

The Results – The best performing model detected AS-positive patients with a 0.83 AUC, while achieving 83% sensitivity, 69% specificity, 71% accuracy, and a 97% negative predictive value (but… a 23% PPV). Given the widespread use and availability of CXRs, these results were good enough for the authors to suggest that their DL model could be a valuable way to detect aortic stenosis.
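The low PPV alongside the high NPV is what you'd expect at a low disease prevalence. A quick sanity-check sketch (assuming roughly 10% AS prevalence in the test set, which is not stated in this summary but is consistent with the reported figures) shows the quoted sensitivity and specificity reproduce approximately the reported PPV and NPV:

```python
def ppv_npv(sensitivity: float, specificity: float, prevalence: float):
    """Predictive values from test characteristics and disease prevalence."""
    tp = sensitivity * prevalence              # true positives (per patient screened)
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Assumed ~10% prevalence; sensitivity/specificity as reported
ppv, npv = ppv_npv(0.83, 0.69, 0.10)
print(f"PPV ≈ {ppv:.0%}, NPV ≈ {npv:.0%}")  # close to the reported 23% / 97%
```

In other words, the 23% PPV isn't necessarily a modeling flaw; it's the arithmetic of screening for a relatively uncommon condition with modest specificity.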

The Response – The folks on radiology/AI Twitter found these results “hard to believe,” given that human rads can’t detect aortic stenosis in CXRs with much better accuracy than a coin flip, and considering that these models were only trained/validated/tested with internal data. The conversation also revealed a growing level of AI study fatigue that will likely become worse if journals don’t start enforcing higher research standards (e.g. external validation, mentioning confounding factors, addressing the 23% PPV, maybe adding an editorial).

The Takeaway – Twitter’s MDs and PhDs love to critique study methodology, but this thread was a particularly helpful reminder of what potential AI users are looking for in AI studies — especially studies that claim AI can detect a condition that’s barely detectable by human experts.

Get every issue of The Imaging Wire, delivered right to your inbox.
