Multimodal NSCLC Treatment Prediction

Memorial Sloan Kettering researchers showed that data from routine diagnostic workups (imaging, pathology, genomics) could be used to predict how patients with non-small cell lung cancer (NSCLC) will respond to immunotherapy, potentially allowing more precise and effective treatment decisions.

Immunotherapy can significantly improve outcomes for patients with advanced NSCLC, and it has already “rapidly altered” the treatment landscape. 

  • However, only ~25% of advanced NSCLC patients respond to immunotherapy, and current biomarkers used to predict response have proved to be “only modestly helpful.”  

The researchers collected baseline diagnostic data from 247 patients with advanced NSCLC, including CTs, histopathology slides, and genomic sequencing. 

  • They then had domain experts curate and annotate this data, and leveraged a computational workflow to extract patient-level features (e.g. CT radiomics), before using their DyAM model to integrate the data and predict therapy response.
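The paper's DyAM architecture isn't detailed here, so purely as a hypothetical illustration of the general idea — attention-weighted fusion of per-modality predictions, with masking so patients missing a modality can still be scored — here's a minimal sketch (function name, weights, and scores below are all made up, not MSK's implementation):

```python
import numpy as np

def masked_attention_fusion(modality_scores, attention_logits, mask):
    """Combine per-modality response predictions with attention weights,
    masking out modalities that are missing for a given patient.

    modality_scores : per-modality predictions, shape (M,)
    attention_logits: learned importance logits, shape (M,)
    mask            : 1 if the modality is present, 0 if missing
    """
    mask = np.asarray(mask, dtype=float)
    # Softmax over the *available* modalities only
    logits = np.where(mask > 0, attention_logits, -np.inf)
    weights = np.exp(logits - logits[mask > 0].max())
    weights = weights / weights.sum()
    # Attention-weighted combination of modality-level scores
    return float(np.dot(weights, modality_scores))

# Example: radiology, pathology, genomics scores; genomics missing
score = masked_attention_fusion(
    modality_scores=[0.7, 0.6, 0.9],
    attention_logits=[1.0, 0.5, 2.0],
    mask=[1, 1, 0],
)
```

Masking plus renormalization is what lets a multimodal model handle the incomplete workups that are common in real-world diagnostic data.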

Using diagnostic data from the same 247 patients, the multimodal DyAM system predicted immunotherapy response with a 0.80 AUC. 

  • That’s notably higher than the current FDA-cleared predictive biomarkers – tumor mutational burden and PD-L1 immunohistochemistry score (AUCs: 0.61 & 0.73) – and all imaging approaches examined in the study (AUCs: 0.62 to 0.64).

The Takeaway

Although MSK’s multimodal immunotherapy response research is still in its very early stages and would be difficult to clinically implement, this study “represents a proof of principle” that integrating diagnostic data that is already being captured could improve treatment predictions – and treatment outcomes.

This study also adds to the recent momentum we’re seeing with multimodal diagnostics and treatment guidance, driven by efforts from academia and highly funded AI startups like SOPHiA GENETICS and Owkin.

CADx’s Lung Nodule Impact

A new JACR study highlighted Computer-Aided Diagnosis (CADx) AI’s ability to improve lung nodule malignancy risk classification, while making a solid case for the technology’s potential clinical role.

The researchers applied RevealDx’s RevealAI-Lung CADx solution to chest CTs from 963 patients with 1,331 nodules (from 2 LC screening datasets, and one incidental nodule dataset), finding that RevealAI-Lung’s malignancy risk scores (mSI) combined with Lung-RADS would significantly improve…

  • Sensitivity versus Lung-RADS-only (3 cohorts: +25%, +68%, +117%)
  • Specificity versus Lung-RADS-only (3 cohorts: +17%, +18%, +33%)
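The study doesn’t spell out exactly how the mSI scores were combined with Lung-RADS, but conceptually the combination reclassifies nodules whose CADx score strongly disagrees with their Lung-RADS category. A toy sketch with made-up thresholds (not the study’s actual rule):

```python
def combine_msi_lungrads(lung_rads, msi, high_thr=0.7, low_thr=0.2):
    """Hypothetical reclassification rule: nudge a Lung-RADS category
    up or down when the CADx malignancy score (mSI, 0-1) strongly
    disagrees with it. Thresholds here are illustrative only.
    """
    if msi >= high_thr and lung_rads < 4:
        return 4          # upgrade a likely false-negative to high risk
    if msi <= low_thr and lung_rads >= 4:
        return 3          # downgrade a likely false-positive to lower risk
    return lung_rads      # otherwise defer to the radiologist's category

upgraded = combine_msi_lungrads(2, 0.85)    # suspicious score -> category 4
downgraded = combine_msi_lungrads(4, 0.10)  # benign-looking score -> category 3
```

Upgrades drive the sensitivity gains and downgrades the specificity gains reported above.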

Looking specifically at the study’s NLST cohort (704 nodules), mSI+Lung-RADS would have…

  • Reclassified 94 nodules to “high risk” (formerly false-negatives)
  • Potentially diagnosed 53 patients with malignant nodules at least one year earlier
  • Reclassified 36 benign nodules to “low-risk” (formerly false-positives)

The RevealDx-based malignancy scores also achieved comparable accuracy to existing clinical risk models when used independently (AUCs: 0.89 vs. 0.86 – 0.88).

The Takeaway

These results suggest that a CADx lung nodule solution like RevealAI-Lung could significantly improve lung nodule severity assessments. Considering the clinical importance of early and accurate diagnosis of high-risk nodules and the many negatives associated with improper diagnosis of low-risk nodules (costs, efficiency, procedures, patient burden), that could be a big deal.

Viz.ai Adds PE Stratification

Viz.ai announced the FDA clearance of its new RV/LV ratio algorithm, adding an important risk stratification feature to its pulmonary embolism AI module, while representing an interesting example of how triage AI solutions might evolve.

Triage + Stratification + Coordination – Viz PE becomes far more comprehensive with its new RV/LV integration, helping radiologists detect/prioritize PE cases and assess right heart strain (a major cause of PE mortality), while equipping PE response teams with more actionable information. 

  • This addition might also improve clinicians’ experience with Viz PE, reducing the risk of AI “alert fatigue” that can develop when all severity levels are treated the same.
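For context, the RV/LV measurement itself is simple: the ratio of right-to-left ventricular diameters on CT pulmonary angiography, where an enlarged ratio suggests right heart strain. A minimal sketch, assuming the commonly cited ~1.0 cutoff (Viz.ai’s actual cleared threshold isn’t stated here):

```python
def rv_lv_risk_flag(rv_diameter_mm, lv_diameter_mm, threshold=1.0):
    """Flag possible right heart strain from ventricular diameters
    measured on CT pulmonary angiography.

    A ratio at or above ~1.0 is a commonly cited marker of RV strain
    in acute PE; the threshold used by any given cleared algorithm
    may differ, so 1.0 here is an assumption.
    """
    ratio = rv_diameter_mm / lv_diameter_mm
    return ratio, ratio >= threshold

ratio, strain = rv_lv_risk_flag(45.0, 38.0)   # dilated RV -> flagged
```

A single extra ratio like this is what turns a detect-and-notify tool into a risk stratification tool.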

Viz.ai is So On-Trend – Signify Research recently forecast that AI leaders will increasingly expand into new clinical segments, enhance their current solutions, and leverage platform / marketplace strategies, as AI evolves from point solutions to comprehensive workflows. Those trends are certainly evident within Viz.ai’s recent PE strategy…

  • Viz PE’s late 2021 launch was a key step in Viz.ai’s expansion beyond neuro/stroke
  • Adding RV/LV risk stratification certainly enhances Viz PE’s detection capabilities
  • Viz PE was developed by Avicenna.AI, arguably making Viz.ai a platform vendor
  • Viz PE’s workflow now combines detection, assessment, and care coordination

The same could be said for Aidoc, which previously added Imbio’s RV/LV algorithm to its PE AI solution (and also supports incidental PE), although few other triage AI workflows are this advanced for PE or other clinical areas.

The Takeaway

Viz.ai’s PE and RV/LV integration is a great example of how detection-focused AI tools can evolve through risk/severity stratification and workflow integration — and it might prove to be a key step in Viz.ai’s expansion beyond stroke AI.

Prioritizing Length of Stay

A new study out of Cedars Sinai provided what might be the strongest evidence yet that imaging AI triage and prioritization tools can shorten inpatient hospitalizations, potentially bolstering AI’s economic and patient care value propositions outside of the radiology department.

The researchers analyzed patient length of stay (LOS) before and after Cedars Sinai adopted Aidoc’s triage AI solutions for intracranial hemorrhage (Nov 2017) and pulmonary embolism (Dec 2018), using 2016-2019 data from all inpatients who received noncontrast head CTs or chest CTAs.

  • ICH Results – Among Cedars Sinai’s 1,718 ICH patients (795 after ICH AI adoption), average LOS dropped by 11.9% from 10.92 to 9.62 days (vs. -5% for other head CT patients).
  • PE Results – Among Cedars Sinai’s 400 patients diagnosed with PE (170 after PE AI adoption), average LOS dropped by a massive 26.3% from 7.91 to 5.83 days (vs. +5.2% for other chest CTA patients). 
  • Control Results – Control group patients with hip fractures saw smaller LOS decreases during the respective post-AI periods (-3% & -8.3%), while hospital-wide LOS was essentially flat-to-rising (-2.5% & +10%).
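Those percentages follow directly from the reported before/after averages:

```python
def pct_change(before, after):
    """Relative change in average length of stay (negative = shorter)."""
    return (after - before) / before * 100

# Figures from the study's ICH and PE cohorts
ich = pct_change(10.92, 9.62)   # ≈ -11.9%
pe  = pct_change(7.91, 5.83)    # ≈ -26.3%
```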

The Takeaway

These results were strong enough for the authors to conclude that Cedars Sinai’s LOS improvements were likely “due to the triage software implementation.” 

Perhaps more importantly, some could also interpret these LOS reductions as evidence that Cedars Sinai’s triage AI adoption also improved its overall patient care and inpatient operating costs, given how these LOS reductions were likely achieved (faster diagnosis & treatment), the typical associations between hospital long stays and negative outcomes, and the fact that inpatient stays have a significant impact on hospital costs.

AI Crosses the Chasm

Despite plenty of challenges, Signify Research forecasts that the global imaging AI market will nearly quadruple by 2026, as AI “crosses the chasm” towards widespread adoption. Here’s how Signify sees that transition happening:

Market Growth – After generating global revenues of around $375M in 2020 and $400M in 2021, Signify expects the imaging AI market to maintain a massive 27.6% CAGR through 2026, when it reaches nearly $1.4B. 
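Signify’s figures are internally consistent: compounding roughly $400M at a 27.6% CAGR for the five years from 2021 to 2026 lands just above $1.35B.

```python
def project_cagr(base, rate, years):
    """Compound a base-year revenue forward at a fixed CAGR."""
    return base * (1 + rate) ** years

# ~$400M in 2021 grown at 27.6% for five years lands near $1.4B
revenue_2026 = project_cagr(400, 0.276, 5)   # ≈ $1,353M
```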

Product-Led Growth – This growth will be partially driven by the availability of new and more-effective AI products, following:

  • An influx of new regulatory-approved solutions
  • Continued improvements to current products (e.g. adding triage to detection tools)
  • AI leaders expanding into new clinical segments
  • AI’s evolution from point solutions to comprehensive solutions/workflows
  • The continued adoption of AI platforms/marketplaces

The Big Four – Imaging AI’s top four clinical segments (breast, cardiology, neurology, pulmonology) represented 87% of the AI market in 2021, and those segments will continue to dominate through 2026. 

VC Support – After investing $3.47B in AI startups between 2015 and 2021, Signify expects that VCs will remain a market growth driver, while their funding continues to shift toward later stage rounds. 

Remaining Barriers – AI still faces plenty of barriers, including limited reimbursements, insufficient economic/ROI evidence, stricter regulatory standards (especially in EU), and uncertain future prioritization from healthcare providers and imaging IT vendors. 

The Takeaway

2022 has been a tumultuous year for AI, bringing a number of notable achievements (increased adoption, improving products, new reimbursements, more clinical evidence, big funding rounds) that sometimes seemed to be overshadowed by AI’s challenges (difficult funding climate, market consolidation, slower adoption than previously hoped).  

However, Signify’s latest research suggests that 2022’s ups-and-downs might prove to be part of AI’s path towards mainstream adoption. And based on the steeper growth Signify forecasts for 2025-2026 (see chart above), the imaging AI market’s growth rate and overall value should become far greater after it finally “crosses the chasm.”

Exo Acquires Medo AI

Exo took a big step towards making its handheld ultrasounds easier to use and adopt, acquiring AI startup Medo AI. Although unexpected, this is a logical and potentially significant acquisition that deserves a deeper look…

Exo plans to integrate Medo’s Sweep AI technology into its ultrasound platform, forecasting that this hardware-software combination will streamline Exo POCUS adoption among clinicians who lack ultrasound training/experience. 

  • Medo’s automated image acquisition and interpretation software has clearance for two exams (thyroid nodule assessments, developmental hip dysplasia screening), and it has more AI modules in development. 

Exo didn’t disclose acquisition costs, but Medo AI is relatively modest in size (23 employees on LinkedIn, no public info on VC rounds) and it’s unclear if it had any other bidders.

  • Either way, Exo can probably afford it following its $220M Series C in July 2021 (total funding now >$320M), especially considering that Medo’s use case directly supports Exo’s core strategy of expanding POCUS to more clinicians.

Some might point out how this acquisition continues 2022’s AI shakeup, which brought three other AI acquisitions (Aidence & Quantib by RadNet; Nines by Sirona) and at least two strategic pivots (MaxQ AI & Kheiron). 

  • That said, this is the first AI acquisition by a hardware vendor and it doesn’t represent the type of segment consolidation that everyone keeps forecasting.

Exo’s Medo acquisition does introduce a potential shift in the way handheld ultrasound vendors might approach expanding their AI software stack, after historically focusing on a mix of partnerships and in-house development. 

The Takeaway

Handheld ultrasound is perhaps the only medical imaging product segment that includes an even mix of the industry’s largest OEMs and extremely well-funded startups, setting the stage for fierce competition. 

That competition is even stronger when you consider that the handheld ultrasound segment’s primary market (point-of-care clinicians) is still early in its adoption curve, which places a big target on any products that could make handheld ultrasounds easier to use and adopt (like Medo AI).

Echo AI COVID Predictions

A new JASE study showed that AI-based echocardiography measurements can be used to predict COVID patient mortality, but manual measurements performed by echo experts can’t. This could be seen as yet another “AI beats humans” study (or yet another COVID AI study), but it also gives important evidence of AI’s potential to reduce echo measurement variability.

Starting with transthoracic echocardiograms from 870 hospitalized COVID patients (13 hospitals, 9 countries, 27.4% who later died), the researchers utilized Ultromics’ EchoGo Core AI solution and a team of expert readers to measure left ventricular ejection fraction (LVEF) and LV longitudinal strain (LVLS). They then analyzed the measurements and applied them to mortality prediction models, finding that the AI-based measurements:

  • Were “significant predictors” of patient mortality (LVEF: OR=0.974, p=0.003; LVLS: OR=1.060, p=0.004), while the manual measurements couldn’t be used to predict mortality
  • Had significantly less variability than the experts’ manual measurements
  • Were similarly “feasible” as manual measurements when applied to the various echo exams
  • Showed stronger correlations with other COVID biomarkers (e.g. diastolic blood pressure)
  • Combined with other biomarkers to produce even more accurate mortality predictions
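For readers parsing those odds ratios: an OR is multiplicative per unit of the predictor, so a 10-point difference in AI-measured LVEF corresponds to roughly a 23% reduction in the odds of death. This is a back-of-envelope illustration of how to read the paper’s numbers, not a figure reported in it:

```python
def odds_multiplier(odds_ratio, delta):
    """Multiplicative change in mortality odds for a `delta`-unit
    change in a predictor, given its per-unit odds ratio."""
    return odds_ratio ** delta

# Per the study's AI-based measurements:
#   LVEF OR = 0.974 per point (each extra point of EF lowers the odds of death)
#   LVLS OR = 1.060 per point (each extra point of strain raises them)
effect_10pt_lvef = odds_multiplier(0.974, 10)   # ≈ 0.77, i.e. ~23% lower odds
```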

The authors didn’t seem surprised that the AI measurements had less variability, nor that this reduced variability “consequently increased the statistical power to predict mortality.”

They also found that sonographers’ original scanning inconsistency was responsible for nearly half of the experts’ measurement variability, suggesting that a combination of echo guidance AI software (e.g. Caption or UltraSight) with echo reporting AI tools (e.g. Us2.ai or Ultromics) could “further reduce variability.”

The Takeaway

Echo AI measurements aren’t about to become a go-to COVID mortality biomarker (clinical factors and comorbidities are much stronger predictors), but this study makes a strong case for echo AI’s measurement consistency advantage. It’s also a reminder that reducing variability improves overall accuracy, which would be valuable for sophisticated prediction models or everyday echocardiography operations.

Annalise.ai’s Pneumothorax Performance

A new Mass General Brigham study highlighted Annalise.ai’s pneumothorax detection solution’s strong diagnostic performance, including across different pneumothorax types and clinical scenarios.

The researchers used Annalise Enterprise CXR Triage Pneumothorax to analyze 985 CXRs (435 positive), detecting simple and tension pneumothorax cases with high accuracy:

  • Simple pneumothorax – 0.979 AUC (94.3% sensitivity, 92.0% specificity)
  • Tension pneumothorax – 0.987 AUC (94.5% sensitivity, 95.3% specificity)

The study also suggests that Annalise Enterprise CXR should maintain this strong performance when used outside of Mass General, as it surpassed standard accuracy benchmarks (>0.95 AUC, >80% sensitivity & specificity) across nearly all of the study’s clinical scenarios (CXR manufacturer, CXR projection type, patient sex/age/positioning). 

The Takeaway

The clinical benefits of early pneumothorax detection are clear, so studies like this are good news for the growing number of FDA-approved pneumothorax AI vendors who are working on clinical adoption. 

However, this study feels like even better news for Annalise.ai, noting that it is one of the few pneumothorax AI vendors that detects both simple and tension pneumothorax, and considering that Annalise Enterprise CXR is capable of detecting 122 other CXR indications (even if it’s currently only FDA-cleared for pneumothorax).

The Case for Pancreatic Cancer Radiomics

Mayo Clinic researchers added to the growing field of evidence suggesting that CT radiomics can be used to detect signs of pancreatic ductal adenocarcinoma (PDAC) well before they are visible to radiologists, potentially allowing much earlier and more effective surgical interventions.

The researchers first extracted pancreatic cancer radiomics features using pre-diagnostic CTs from 155 patients who were later diagnosed with PDAC, alongside 265 CTs from healthy patients. The pre-diagnostic CTs had been performed for unrelated reasons, a median of 398 days before cancer diagnosis.

They then trained and tested four different radiomics-based machine learning models using the same internal dataset (training: 292 CTs; testing: 128 CTs), with the top model identifying future pancreatic cancer patients with promising results:

  • AUC – 0.98
  • Accuracy – 92.2%
  • Sensitivity – 95.5%
  • Specificity – 90.3% 
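As a refresher on how those three figures relate, they all derive from one confusion matrix. The counts below are hypothetical, chosen only to roughly reproduce the reported test-set metrics (the paper’s actual counts aren’t given here):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }

# Hypothetical counts for a 128-CT test set that land near the reported figures
m = classification_metrics(tp=42, fp=8, tn=76, fn=2)
```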

Interestingly, the same ML model had even better specificity in follow-up tests using an independent internal dataset (n=176; 92.6%) and an external NIH dataset (n=80; 96.2%).

Mayo Clinic’s ML radiomics approach also significantly outperformed two radiologists, who achieved “only fair” inter-reader agreement (Cohen’s kappa 0.3) and produced far lower AUCs (rads’ 0.66 vs. ML’s 0.95 – 0.98). That’s understandable, given that these early pancreatic cancer “imaging signatures” aren’t visible to humans.

The Takeaway

Although radiomics-based pancreatic cancer detection is still immature, this and other recent studies certainly support its potential to detect early-stage pancreatic cancer while it’s treatable. 

That evidence should grow even more conclusive in the future, noting that members of this same Mayo Clinic team are operating a 12,500-patient prospective/randomized trial exploring CT-based pancreatic cancer screening.

Cathay’s AI Underwriting

Cathay Life Insurance will use Lunit’s INSIGHT CXR AI solution to identify abnormalities in its applicants’ chest X-rays, potentially modernizing a manual underwriting process and uncovering a new non-clinical market for AI vendors.

Lunit INSIGHT CXR will be integrated into Cathay’s underwriting workflow, with the goals of enhancing its radiologists’ accuracy and efficiency, while improving Cathay’s underwriting decisions. 

Lunit and Cathay have reason to be optimistic about this endeavor, given that their initial proof of concept study found that INSIGHT CXR:

  • Improved Cathay’s radiologists’ reading accuracy by 20%
  • Reduced the radiologists’ overall reading time by up to 90%

Those improvements could have a significant labor impact, considering that Cathay’s rads review 30,000 CXRs every year. They might have an even greater business impact, given the important role that underwriting accuracy plays in policy profitability.

Lunit’s part of the announcement largely focused on its expansion beyond clinical settings, revealing plans to “become the driving force of digital innovation in the global insurance market” and to further expand its business into “various sectors outside the hospital setting.”

The Takeaway

Even if life insurers only require CXRs for a small percentage of their applicants (older people, higher value policies), they still review hundreds of thousands of CXRs each year. That makes insurers an intriguing new market segment for AI vendors, and makes you wonder what other non-clinical AI use cases might exist. However, it might also give radiologists who remain skeptical about AI a new reason for concern.
