Exo Acquires Medo AI

Exo took a big step towards making its handheld ultrasounds easier to use and adopt, acquiring AI startup Medo AI. Although unexpected, this is a logical and potentially significant acquisition that deserves a deeper look…

Exo plans to integrate Medo’s Sweep AI technology into its ultrasound platform, forecasting that this hardware-software combination will streamline Exo POCUS adoption among clinicians who lack ultrasound training/experience. 

  • Medo’s automated image acquisition and interpretation software has clearance for two exams (thyroid nodule assessments, developmental hip dysplasia screening), and it has more AI modules in development. 

Exo didn’t disclose the acquisition price, but Medo AI is a relatively modest company (23 employees on LinkedIn, no public information on VC rounds), and it’s unclear whether there were any other bidders.

  • Either way, Exo can probably afford it following its $220M Series C in July 2021 (total funding now >$320M), especially considering that Medo’s use case directly supports Exo’s core strategy of expanding POCUS to more clinicians.

Some might point out how this acquisition continues 2022’s AI shakeup, which brought three other AI acquisitions (Aidence & Quantib by RadNet; Nines by Sirona) and at least two strategic pivots (MaxQ AI & Kheiron). 

  • That said, this is the first AI acquisition by a hardware vendor and it doesn’t represent the type of segment consolidation that everyone keeps forecasting.

Exo’s Medo acquisition does introduce a potential shift in how handheld ultrasound vendors expand their AI software stacks, which have historically relied on a mix of partnerships and in-house development.

The Takeaway

Handheld ultrasound is perhaps the only medical imaging product segment that includes an even mix of the industry’s largest OEMs and extremely well-funded startups, setting the stage for fierce competition. 

That competition is even stronger when you consider that the handheld ultrasound segment’s primary market (point-of-care clinicians) is still early in its adoption curve, which places a big target on any products that could make handheld ultrasounds easier to use and adopt (like Medo AI).

Echo AI COVID Predictions

A new JASE study showed that AI-based echocardiography measurements can be used to predict COVID patient mortality, but manual measurements performed by echo experts can’t. This could be seen as yet another “AI beats humans” study (or yet another COVID AI study), but it also gives important evidence of AI’s potential to reduce echo measurement variability.

Starting with transthoracic echocardiograms from 870 hospitalized COVID patients (13 hospitals, 9 countries, 27.4% of whom later died), the researchers used Ultromics’ EchoGo Core AI solution and a team of expert readers to measure left ventricular ejection fraction (LVEF) and LV longitudinal strain (LVLS). They then analyzed the measurements and applied them to mortality prediction models, finding that the AI-based measurements:

  • Were “significant predictors” of patient mortality (LVEF: OR=0.974, p=0.003; LVLS: OR=1.060, p=0.004; see the sketch after this list), while the manual measurements couldn’t be used to predict mortality
  • Had significantly less variability than the experts’ manual measurements
  • Were as “feasible” as manual measurements across the various echo exams
  • Showed stronger correlations with other COVID biomarkers (e.g. diastolic blood pressure)
  • Combined with other biomarkers to produce even more accurate mortality predictions
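
For a sense of scale, here is a minimal sketch of how per-unit odds ratios like these feed a logistic mortality model. The ORs come from the study; the baseline odds and the example measurement changes are hypothetical.

```python
# Hedged sketch: translating the study's per-unit odds ratios into mortality
# odds under a logistic model. Only the ORs are from the paper; everything
# else (baseline odds, example deltas) is illustrative.
OR_LVEF = 0.974   # odds ratio per 1-point increase in LVEF
OR_LVLS = 1.060   # odds ratio per 1-point increase in LV longitudinal strain

def adjusted_odds(baseline_odds: float, delta_lvef: float, delta_lvls: float) -> float:
    """Scale baseline mortality odds by each OR raised to the change in its measurement."""
    return baseline_odds * (OR_LVEF ** delta_lvef) * (OR_LVLS ** delta_lvls)

# Hypothetical patient: LVEF 10 points below the cohort average, LVLS 5 points
# higher (worse). Baseline odds of 0.38 roughly match the 27.4% observed mortality.
odds = adjusted_odds(0.38, delta_lvef=-10, delta_lvls=5)
print(f"adjusted odds: {odds:.2f}, implied probability: {odds / (1 + odds):.1%}")
```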

The authors didn’t seem too surprised that AI measurements had less variability, or by their conclusion that reducing measurement variability “consequently increased the statistical power to predict mortality.”

They also found that sonographers’ original scanning inconsistency was responsible for nearly half of the experts’ measurement variability, suggesting that a combination of echo guidance AI software (e.g. Caption or UltraSight) with echo reporting AI tools (e.g. Us2.ai or Ultromics) could “further reduce variability.”

The Takeaway

Echo AI measurements aren’t about to become a go-to COVID mortality biomarker (clinical factors and comorbidities are much stronger predictors), but this study makes a strong case for echo AI’s measurement consistency advantage. It’s also a reminder that reducing variability improves overall accuracy, which would be valuable for sophisticated prediction models or everyday echocardiography operations.

Annalise.ai’s Pneumothorax Performance

A new Mass General Brigham study highlighted the strong diagnostic performance of Annalise.ai’s pneumothorax detection solution, including across different pneumothorax types and clinical scenarios.

The researchers used Annalise Enterprise CXR Triage Pneumothorax to “analyze” 985 CXRs (435 positive), detecting simple and tension pneumothorax cases with high accuracy:

  • Simple pneumothorax – 0.979 AUC (94.3% sensitivity, 92.0% specificity)
  • Tension pneumothorax – 0.987 AUC (94.5% sensitivity, 95.3% specificity)

The study also suggests that Annalise Enterprise CXR should maintain this strong performance when used outside of Mass General, as it surpassed standard accuracy benchmarks (>0.95 AUC, >80% sensitivity & specificity) across nearly all of the study’s clinical scenarios (CXR manufacturer, CXR projection type, patient sex/age/positioning). 

The Takeaway

The clinical benefits of early pneumothorax detection are clear, so studies like this are good news for the growing number of FDA-approved pneumothorax AI vendors who are working on clinical adoption. 

However, this study feels like even better news for Annalise.ai, given that it is one of the few pneumothorax AI vendors that detects both simple and tension pneumothorax, and that Annalise Enterprise CXR is capable of detecting 122 other CXR indications (even if it’s currently only FDA-cleared for pneumothorax).

The Case for Pancreatic Cancer Radiomics

Mayo Clinic researchers added to the growing body of evidence suggesting that CT radiomics can detect signs of pancreatic ductal adenocarcinoma (PDAC) well before they are visible to radiologists, potentially allowing much earlier and more effective surgical interventions.

The researchers first extracted pancreatic cancer’s radiomics features using pre-diagnostic CTs from 155 patients who were later diagnosed with PDAC and 265 CTs from healthy patients. The pre-diagnostic CTs were performed for unrelated reasons a median of 398 days before cancer diagnosis.

They then trained and tested four different radiomics-based machine learning models using the same internal dataset (training: 292 CTs; testing: 128 CTs), with the top model identifying future pancreatic cancer patients with promising results:

  • AUC – 0.98
  • Accuracy – 92.2%
  • Sensitivity – 95.5%
  • Specificity – 90.3% 

Interestingly, the same ML model had even better specificity in follow-up tests using an independent internal dataset (n = 176; 92.6%) and an external NIH dataset (n = 80; 96.2%).
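
For readers curious about the mechanics, here is a minimal sketch of a radiomics-style classification pipeline under stated assumptions: the radiomics features are presumed to be already extracted from segmented pancreas CTs, a generic gradient-boosting classifier stands in for the study’s four models, and the feature matrix is filled with placeholder data.

```python
# Hedged sketch of a radiomics-style classification workflow, not the Mayo
# Clinic team's actual code. X would hold extracted radiomics features
# (texture, shape, intensity) per CT; y marks pre-diagnostic PDAC vs. control.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(420, 88))      # placeholder: 420 CTs x 88 radiomics features
y = rng.integers(0, 2, size=420)    # placeholder labels

# Mirror the study's internal split (292 training CTs, 128 testing CTs).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=292, test_size=128, stratify=y, random_state=0
)

model = GradientBoostingClassifier()  # stand-in for one of the four candidate models
model.fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"test AUC: {roc_auc_score(y_test, probs):.2f}")
```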

Mayo Clinic’s ML radiomics approach also significantly outperformed two radiologists, who achieved “only fair” inter-reader agreement (Cohen’s kappa 0.3) and produced far lower AUCs (rads’ 0.66 vs. ML’s 0.95 – 0.98). That’s understandable, given that these early pancreatic cancer “imaging signatures” aren’t visible to humans.

The Takeaway

Although radiomics-based pancreatic cancer detection is still immature, this and other recent studies certainly support its potential to detect early-stage pancreatic cancer while it’s still treatable. That evidence should grow even more conclusive in the future, noting that members of this same Mayo Clinic team are running a 12,500-patient prospective/randomized trial exploring CT-based pancreatic cancer screening.

Cathay’s AI Underwriting

Cathay Life Insurance will use Lunit’s INSIGHT CXR AI solution to identify abnormalities in its applicants’ chest X-rays, potentially modernizing a manual underwriting process and uncovering a new non-clinical market for AI vendors.

Lunit INSIGHT CXR will be integrated into Cathay’s underwriting workflow, with the goals of enhancing its radiologists’ accuracy and efficiency, while improving Cathay’s underwriting decisions. 

Lunit and Cathay have reason to be optimistic about this endeavor, given that their initial proof of concept study found that INSIGHT CXR:

  • Improved Cathay’s radiologists’ reading accuracy by 20%
  • Reduced the radiologists’ overall reading time by up to 90%

Those improvements could have a significant labor impact, considering that Cathay’s rads review 30,000 CXRs every year. They might have an even greater business impact, given the important role that underwriting accuracy plays in policy profitability.
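
As a rough illustration of that labor math (the announcement doesn’t state a baseline read time, so the per-CXR figure below is an assumption):

```python
# Back-of-envelope labor estimate. The annual volume and the "up to 90%"
# figure are from the announcement; the baseline read time is assumed.
annual_cxrs = 30_000
baseline_minutes_per_read = 2.0      # assumption for illustration
time_reduction = 0.90                # "up to 90%" reading time reduction

baseline_hours = annual_cxrs * baseline_minutes_per_read / 60
saved_hours = baseline_hours * time_reduction
print(f"baseline: {baseline_hours:.0f} h/yr, potential savings: {saved_hours:.0f} h/yr")
# -> baseline: 1000 h/yr, potential savings: 900 h/yr (under these assumptions)
```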

Lunit’s part of the announcement largely focused on its expansion beyond clinical settings, revealing plans to “become the driving force of digital innovation in the global insurance market” and to further expand its business into “various sectors outside the hospital setting.”

The Takeaway

Even if life insurers only require CXRs for a small percentage of their applicants (older people, higher-value policies), they still review hundreds of thousands of CXRs each year. That makes insurers an intriguing new market segment for AI vendors, and makes you wonder what other non-clinical AI use cases might exist. However, it might also concern radiologists who are still skeptical about AI.

Optellum’s NTAPC

Optellum joined the small group of imaging AI vendors who are on a path towards reimbursements, representing a major milestone for Optellum and another sign of progress for the business of imaging AI.

With Optellum’s “New Technology Ambulatory Payment Classification” (NTAPC), providers who use the Optellum Lung Cancer Prediction solution with Medicare patients can bill CMS $600-$700 for each use (CPT: 0721T).

Physicians would use Optellum LCP to analyze a Medicare patient’s CT scan, leveraging Optellum’s pulmonary nodule risk scores to support their decision whether to refer the patient to a pulmonologist. Then they would bill CMS for reimbursement.

However, like previous NTAPCs, this is just the first step in Optellum’s path towards full reimbursement coverage:

  • Regional Medicare Administrative Contractors will initially decide whether to reimburse on a case-by-case basis (and can decline reimbursements)
  • A similar process will happen with private plans
  • Reimbursements would only be nationally required once Optellum LCP is covered by each of the 12 MAC geographies and all commercial payors

Although reimbursement isn’t guaranteed, Optellum’s CMS-defined reimbursement rate and process represent a solid first step, especially considering that Perspectum’s and HeartFlow’s previous NTAPCs led to widespread coverage.

Optellum’s NTAPC also continues imaging AI’s overall progress towards reimbursements. Within the last two years, Viz.ai and Caption Health scored the first AI NTAPs (guaranteed add-on payments, but temporary) and startups like Nanox AI, Koios, and Perspectum landed AI’s first CPT III codes (reimbursements not guaranteed, but data collected for future reimbursement decisions). 

The Takeaway

Although reimbursements are still elusive for most AI vendors and not even guaranteed for most AI products that already have billing codes, it’s clear that we’re seeing more progress towards AI reimbursements. That’s good news for AI vendors, since it’s pretty much proven that reimbursements drive AI adoption and are necessary to show ROI for many AI products.

AI Experiences & Expectations

The European Society of Radiology just published new insights into how imaging AI is being used across Europe and how the region’s radiologists view this emerging technology.

The Survey – The ESR reached out to 27,700 European radiologists in January 2022 with a survey regarding their experiences and perspectives on imaging AI, receiving responses from just 690 rads.

Early Adopters – 276 of the 690 respondents (40%) had clinical experience using imaging AI, with the majority of these AI users:

  • Working at academic and regional hospitals (52% & 37% – only 11% at practices)
  • Leveraging AI for interpretation support, case prioritization, and post-processing (51.5%, 40%, 28.6%)

AI Experiences – The radiologists who do use AI revealed a mix of positive and negative experiences:

  • Most found diagnostic AI’s output reliable (75.7%)
  • Few experienced technical difficulties integrating AI into their workflow (17.8%)
  • The majority found AI prioritization tools to be “very helpful” or “moderately helpful” for reducing staff workload (23.4% & 62.2%)
  • However, far fewer reported that diagnostic AI tools reduced staff workload (22.7% Yes, 69.8% No)

Adoption Barriers – Most coverage of this study will likely focus on the fact that only 92 of the surveyed rads (13.3%) plan to acquire AI in the future, while 363 don’t intend to acquire AI (52.6%). The radiologists who don’t plan to adopt AI (including those who’ve never used AI) based their opinions on:

  • AI’s lack of added value (44.4%)
  • AI not performing as well as advertised (26.4%)
  • AI adding too much work (22.9%)
  • And “no reason” (6.3%)

US Context – These results are in the same ballpark as the ACR’s 2020 US-based survey (33.5% using AI, only 20% of non-users planned to adopt within 5 years), although 2020 feels like a long time ago.

The Takeaway

Even if this ESR survey might leave you asking more questions (What about AI’s impact on patient care? How often is AI actually being used? How do opinions differ between AI users and non-users?), more than anything it confirms what many of us already know… We’re still very early in AI’s evolution, and there are still plenty of performance and perception barriers that AI has to overcome.

Chest CT AI Efficiency

A new AJR study out of the Medical University of South Carolina showed that Siemens Healthineers’ AI-RAD Companion Chest CT solution significantly reduced radiologists’ interpretation times. Considering that radiologist efficiency is often sacrificed in order to achieve AI’s accuracy and prioritization benefits, this study is worth a deeper look.

MUSC integrated Siemens’ AI-RAD Companion Chest CT into its PACS workflow, providing its radiologists with automated image analysis, quantification, visualization, and results for several key chest CT exams.

Three cardiothoracic radiologists were randomly assigned chest CT exams from 390 patients (195 with AI support), and the AI-supported interpretations proved significantly faster…

  • For the combined readers – 328 vs. 421 seconds 
  • For each individual radiologist – 289 vs. 344; 449 vs. 649; 281 vs. 348 seconds
  • For contrast-enhanced scans – 20% faster
  • For non-contrast scans – 24.2% faster
  • For negative scans – 26.4% faster
  • For positive scans without significant new findings – 25.7% faster
  • For positive scans with significant new findings – 20.4% faster

Overall, the solution allowed a 22.1% average reduction in radiologist interpretation times, or an hour per typical workday.
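
For reference, here is the rough arithmetic behind those figures, using the study’s combined-reader averages and an assumed daily chest CT volume:

```python
# Worked arithmetic for the efficiency claim. The per-exam times come from the
# study's combined-reader averages; the daily exam volume is an assumption.
seconds_without_ai = 421
seconds_with_ai = 328
saved_per_exam = seconds_without_ai - seconds_with_ai        # 93 seconds

reduction = saved_per_exam / seconds_without_ai              # ~22.1%
exams_per_day = 40                                           # assumed chest CT volume
saved_minutes_per_day = saved_per_exam * exams_per_day / 60  # ~62 minutes
print(f"{reduction:.1%} faster, ~{saved_minutes_per_day:.0f} min saved per day")
```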

The authors didn’t explore the solution’s impact on radiologist accuracy, noting that AI accuracy has already been covered in plenty of previous studies. In fact, members of this same MUSC research team previously showed that AI-RAD Companion Chest CT identified abnormalities more accurately than many of its radiologists.

The Takeaway

Out of the hundreds of AI studies we see each year, very few have tried to measure efficiency gains, and even fewer have shown that AI actually reduces radiologist interpretation times. Given the massive exam volumes that radiologists are facing and the crucial role efficiency plays in AI ROI calculations, these results are particularly encouraging, and suggest that AI can indeed improve both accuracy and efficiency.

Burdenless Incidental AI

A team of IBM Watson Health researchers developed an interesting image and text-based AI system that could significantly improve incidental lung nodule detection, without being “overly burdensome” for radiologists. That seems like a clinical and workflow win-win for any incidental AI system, and makes this study worth a deeper look.

Watson Health’s R&D-stage AI system automatically detects potential lung nodules in chest and abdominal CTs, and then analyzes the text in corresponding radiology reports to confirm whether they mention lung nodules. In clinical practice, the system would flag exams with potentially missed nodules for radiologist review.
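
As a rough illustration (not IBM’s actual implementation), the flagging logic described above might look something like the sketch below, with the keyword list, confidence threshold, and function names all hypothetical:

```python
# Hedged sketch of image/report cross-referencing for missed-nodule flagging.
# Assumes an upstream image model returns scored nodule detections per CT and
# a simple keyword check stands in for the system's report-text analysis.
import re

NODULE_TERMS = re.compile(r"\b(nodule|nodular opacity|pulmonary mass)\b", re.IGNORECASE)

def report_mentions_nodule(report_text: str) -> bool:
    """Crude stand-in for the NLP step: does the report mention a nodule?"""
    return bool(NODULE_TERMS.search(report_text))

def flag_for_review(ct_detections: list[dict], report_text: str,
                    min_confidence: float = 0.5) -> bool:
    """Flag the exam if the image model found a likely nodule that the report omits."""
    found_nodule = any(d["score"] >= min_confidence for d in ct_detections)
    return found_nodule and not report_mentions_nodule(report_text)

# Example: one confident detection, no mention in the report -> flagged for re-read.
print(flag_for_review([{"score": 0.83}], "No acute cardiopulmonary abnormality."))
```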

The researchers used the AI system to analyze 32k CTs sourced from three health systems in the US and UK. They then had radiologists review the 415 studies that the AI system flagged for potentially missed pulmonary nodules, finding that it:

  • Caught 100 exams containing at least one missed nodule
  • Flagged 315 exams that didn’t feature nodules (false positives)
  • Achieved a 24% overall positive predictive value
  • Produced just a 1% false positive rate

The AI system’s combined ability to detect missed pulmonary nodules while “minimizing” radiologists’ re-reading labor was enough to make the authors optimistic about this type of AI. They specifically suggested that it could be a valuable addition to Quality Assurance programs, improving patient care while avoiding the healthcare and litigation costs that can come from missed findings.

The Takeaway

Watson Health’s new AI system adds to incidental AI’s growing momentum, joining a number of research and clinical-stage solutions that emerged in the last two years. However, this system’s ability to cross-reference radiology report text and apparent ability to minimize false positives are relatively unique. 

Even if most incidental AI tools aren’t ready for everyday clinical use, and their potential to increase re-read labor might be alarming to some rads, these solutions’ ability to catch earlier stage diseases and minimize the impact of diagnostic “misses” could earn the attention of a wide range of healthcare stakeholders going forward.

Autonomous & Ultrafast Breast MRI

A new study out of the University of Groningen highlighted the scanning and diagnostic efficiency advantages that might come from combining ultrafast breast MRI with autonomous AI. That might make some readers uncomfortable, but the fact that autonomous AI is one of 2022’s most controversial topics makes this study worth some extra attention.

The researchers used 837 “TWIST” ultrafast breast MRI exams from 488 patients (118 abnormal breasts, 34 w/ malignant lesions) to train and validate a deep learning model to detect and automatically exclude normal exams from radiologist workloads. They then tested it against 178 exams from 149 patients from the same institution (55 abnormal, 30 w/ malignant lesions), achieving a 0.81 AUC.

When evaluated at a conservative 0.25 detection error threshold, the DL model:

  • Achieved 98% sensitivity and a 98% negative predictive value
  • Misclassified one abnormal exam as normal (out of 55)
  • Correctly classified all exams with malignant lesions
  • Would have reduced radiologists’ exam workload by 6.2% (-15.7% at breast level)

When evaluated at a 0.37 detection error threshold, the model:

  • Achieved 95% sensitivity and a 97% negative predictive value (still high)
  • Misclassified three abnormal exams (3 of 55), including one malignant lesion
  • Would have reduced radiologists’ exam workload by 15.7% (-30.6% at breast level)
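
To make the triage concept concrete, here is a minimal sketch of how this kind of normal-exam exclusion could work. It assumes the model outputs a per-exam abnormality score and that exams scoring below a chosen operating point are removed from the radiologist worklist; the score semantics and threshold value are illustrative rather than the study’s.

```python
# Hedged sketch of threshold-based normal-exam triage, not the Groningen model.
from dataclasses import dataclass

@dataclass
class Exam:
    exam_id: str
    abnormality_score: float   # assumed model output in [0, 1]

def triage(exams: list[Exam], threshold: float) -> tuple[list[Exam], list[Exam]]:
    """Split exams into those excluded as normal and those kept for reading."""
    excluded = [e for e in exams if e.abnormality_score < threshold]
    kept = [e for e in exams if e.abnormality_score >= threshold]
    return excluded, kept

# Illustrative worklist and operating point (values are hypothetical).
worklist = [Exam("A", 0.04), Exam("B", 0.31), Exam("C", 0.62), Exam("D", 0.11)]
excluded, kept = triage(worklist, threshold=0.2)
print(f"excluded {len(excluded)} of {len(worklist)} exams "
      f"({len(excluded) / len(worklist):.0%} workload reduction)")
```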

These radiologist workflow improvements would complement the TWIST ultrafast MRI sequence’s far shorter magnet time (2 vs. 20 minutes for current protocols), while the DL model could further reduce scan times by automatically ending exams once they are flagged as normal.

The Takeaway

Even if the world might not be ready for this type of autonomous AI workflow, this study is a good example of how abbreviated MRI protocols and AI could be able to improve both imaging team and radiologist efficiency. It’s also the latest in a series of studies exploring how AI could exclude normal scans from radiologist workflows, suggesting that the development and design of this type of autonomous AI will continue to mature.
