iCAD and Solis CVD Alliance

iCAD and major breast imaging center operator Solis Mammography announced plans to develop and commercialize AI that quantifies breast arterial calcifications (BACs) in mammograms to identify women with high cardiovascular disease (CVD) risk.

Through the multi-year alliance, iCAD and Solis will expand upon iCAD’s flagship ProFound AI solution’s ability to detect and quantify BACs, with the goal of helping radiologists identify women with high CVD risks and guide them into care.

iCAD and Solis’ expansion into cardiovascular disease screening wasn’t exactly expected, but recent trends certainly suggest that commercial AI-based BAC detection could be on the way: 

  • There’s mounting academic and commercial momentum behind using AI to “opportunistically” screen for incidental findings in scans that were performed for other reasons (e.g. analyzing CTs for CAC scores, osteoporosis, or lung nodules).
  • Heart disease is the leading cause of death in the US, yet we appear to be a long way from formal heart disease screening programs, making the already-established mammography screening pathway a logical alternative.
  • Volpara and Microsoft are also working on a mammography AI product that detects and quantifies BACs. In other words, at least three of the biggest companies in breast imaging and one of the biggest tech companies in the world are currently developing AI-based BAC screening solutions.

The Takeaway

Widespread adoption of mammography AI-based cardiovascular disease screening might seem like a longshot to many readers who often view incidentals as a burden and have grown weary of early-stage AI announcements… and they might be right. That said, there’s plenty of evidence suggesting that a solution like this would help detect more early-stage heart disease using scans that are already being performed.

Prioritizing Length of Stay

A new study out of Cedars Sinai provided what might be the strongest evidence yet that imaging AI triage and prioritization tools can shorten inpatient hospitalizations, potentially bolstering AI’s economic and patient care value propositions outside of the radiology department.

The researchers analyzed patient length of stay (LOS) before and after Cedars Sinai adopted Aidoc’s triage AI solutions for intracranial hemorrhage (Nov 2017) and pulmonary embolism (Dec 2018), using 2016-2019 data from all inpatients who received noncontrast head CTs or chest CTAs.

  • ICH Results – Among Cedars Sinai’s 1,718 ICH patients (795 after ICH AI adoption), average LOS dropped by 11.9% from 10.92 to 9.62 days (vs. -5% for other head CT patients).
  • PE Results – Among Cedars Sinai’s 400 patients diagnosed with PE (170 after PE AI adoption), average LOS dropped by a massive 26.3% from 7.91 to 5.83 days (vs. +5.2% for other chest CTA patients). 
  • Control Results – Control group patients with hip fractures saw smaller LOS decreases during the respective post-AI periods (-3% & -8.3%), while hospital-wide LOS trends were mixed (-2.5% & +10%). The headline percentages are sanity-checked in the sketch below.
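
Those LOS percentages follow directly from the reported pre/post averages. Here’s a minimal sanity-check sketch (our own arithmetic, not the study’s code):

```python
def pct_change(pre_days: float, post_days: float) -> float:
    """Percent change in average length of stay (negative = reduction)."""
    return (post_days - pre_days) / pre_days * 100

print(f"ICH: {pct_change(10.92, 9.62):+.1f}%")  # -11.9%
print(f"PE:  {pct_change(7.91, 5.83):+.1f}%")   # -26.3%
```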

The Takeaway

These results were strong enough for the authors to conclude that Cedars Sinai’s LOS improvements were likely “due to the triage software implementation.” 

Perhaps more importantly, some could also interpret these LOS reductions as evidence that Cedars Sinai’s triage AI adoption improved its overall patient care and reduced its inpatient operating costs, given how these LOS reductions were likely achieved (faster diagnosis & treatment), the typical association between longer hospital stays and negative outcomes, and the fact that inpatient stays have a significant impact on hospital costs.

Prostate MR AI’s Experience Boost

A new European Radiology study showed that Siemens Healthineers’ AI-RAD Companion Prostate MR solution can improve radiologists’ lesion assessment accuracy (especially less-experienced rads), while reducing reading times and lesion grading variability. 

The researchers had four radiologists (two experienced, two inexperienced) assess lesions in 172 prostate MRI exams, with and without AI support, finding that AI-RAD Companion Prostate MR improved:

  • The less-experienced radiologists’ performance, significantly (AUCs: 0.66 to 0.80 & 0.68 to 0.80)
  • The experienced rads’ performance, modestly (AUCs: 0.81 to 0.86 & 0.81 to 0.84)
  • Overall PI-RADS category and Gleason score correlations (r = 0.45 to 0.57 – see the sketch below)
  • Median reading times (157 to 150 seconds)
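
On the correlation point above: the figures are reported as r values, and assuming a rank correlation such as Spearman’s (a natural fit for ordinal PI-RADS categories and Gleason grade groups, though the study’s exact method isn’t restated here), the calculation looks roughly like this:

```python
from scipy.stats import spearmanr

# Hypothetical paired lesion assessments: PI-RADS categories (1-5)
# and Gleason grade groups (1-5). Illustrative values only.
pi_rads = [3, 4, 5, 2, 4, 5, 3, 1]
gleason = [2, 3, 5, 1, 2, 4, 3, 1]

r, p = spearmanr(pi_rads, gleason)
print(f"r = {r:.2f} (p = {p:.3f})")
```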

The study also highlights Siemens Healthineers’ emergence as an AI research leader, leveraging its relationship and funding advantages over AI-only vendors and its (potentially) greater focus on AI research than its OEM peers to become one of imaging AI’s most-published vendors.

The Takeaway

Given the role that experience plays in radiologists’ prostate MRI accuracy, and noting prostate MRI’s historical challenges with variability, this study makes a solid case for AI-RAD Companion Prostate MR’s ability to improve rads’ diagnostic performance (without slowing them down). It’s also a reminder that Siemens Healthineers is serious about supporting its homegrown AI portfolio through academic research.

RevealDx & contextflow’s Lung CT Alliance

RevealDx and contextflow announced a new alliance that should advance the companies’ product and distribution strategies, and appears to highlight an interesting trend towards more comprehensive AI solutions.

The companies will integrate RevealDx’s RevealAI-Lung solution (lung nodule characterization) with contextflow’s SEARCH Lung CT software (lung nodule detection and quantification), creating a uniquely comprehensive lung cancer screening offering. 

contextflow will also become RevealDx’s exclusive distributor in Europe, adding to RevealDx’s global channel that includes a distribution alliance with Volpara (exclusive in Australia/NZ, non-exclusive in US) and a platform integration deal with Sirona.

The alliance highlights contextflow’s new partner-driven strategy to expand SEARCH Lung CT beyond its image-based retrieval roots, and comes just a few weeks after contextflow announced an integration with Oxipit’s ChestEye Quality AI solution to identify missed lung nodules.

In fact, contextflow’s AI expansion efforts appear to be part of an emerging trend, as AI vendors work to support multiple steps within a given clinical activity (e.g. lung cancer assessments) or spot a wider range of pathologies in a given exam (e.g. CXRs):

  • Volpara has amassed a range of complementary breast cancer screening solutions, and has started to build out a similar suite of lung cancer screening solutions (including RevealDx & Riverain).
  • A growing field of chest X-ray AI vendors (Annalise.ai, Lunit, Qure.ai, Oxipit, Vuno) leads with the ability to detect multiple findings from a single CXR scan and AI workflow. 
  • Siemens Healthineers’ AI-RAD Companion Chest CT solution combines these two approaches, automating multiple diagnostic tasks (analysis, quantification, visualization, results generation) across a range of different chest CT exams and organs.

The Takeaway

contextflow and RevealDx’s European alliance seems to make a lot of sense, allowing contextflow to enhance its lung nodule detection/quantification findings with characterization details, while giving RevealDx the channel and lung nodule detection starting points that it likely needs.

The partnership also appears to represent another step towards more comprehensive and potentially more clinically valuable AI solutions, and away from the narrow applications that have dominated AI portfolios (and AI critiques) until now.

Cathay’s AI Underwriting

Cathay Life Insurance will use Lunit’s INSIGHT CXR AI solution to identify abnormalities in its applicants’ chest X-rays, potentially modernizing a manual underwriting process and uncovering a new non-clinical market for AI vendors.

Lunit INSIGHT CXR will be integrated into Cathay’s underwriting workflow, with the goals of enhancing its radiologists’ accuracy and efficiency while improving Cathay’s underwriting decisions. 

Lunit and Cathay have reason to be optimistic about this endeavor, given that their initial proof of concept study found that INSIGHT CXR:

  • Improved Cathay’s radiologists’ reading accuracy by 20%
  • Reduced the radiologists’ overall reading time by up to 90%

Those improvements could have a significant labor impact, considering that Cathay’s rads review 30,000 CXRs every year. They might have an even greater business impact, given the important role that underwriting accuracy plays in policy profitability.

Lunit’s part of the announcement largely focused on its expansion beyond clinical settings, revealing plans to “become the driving force of digital innovation in the global insurance market” and to further expand its business into “various sectors outside the hospital setting.”

The Takeaway

Even if life insurers only require CXRs for a small percentage of their applicants (older people, higher-value policies), they still review hundreds of thousands of CXRs each year. That makes insurers an intriguing new market segment for AI vendors, and makes you wonder what other non-clinical AI use cases might exist. However, it might also concern radiologists who remain skeptical about AI.

AI Experiences & Expectations

The European Society of Radiology just published new insights into how imaging AI is being used across Europe and how the region’s radiologists view this emerging technology.

The Survey – The ESR reached out to 27,700 European radiologists in January 2022 with a survey regarding their experiences and perspectives on imaging AI, receiving responses from just 690 rads (a roughly 2.5% response rate).

Early Adopters – 276 of the 690 respondents (40%) had clinical experience using imaging AI, with the majority of these AI users:

  • Working at academic and regional hospitals (52% & 37% – only 11% at practices)
  • Leveraging AI for interpretation support, case prioritization, and post-processing (51.5%, 40%, 28.6%)

AI Experiences – The radiologists who do use AI revealed a mix of positive and negative experiences:

  • Most found diagnostic AI’s output reliable (75.7%)
  • Few experienced technical difficulties integrating AI into their workflow (17.8%)
  • The majority found AI prioritization tools to be “very helpful” or “moderately helpful” for reducing staff workload (23.4% & 62.2%)
  • However, far fewer reported that diagnostic AI tools reduced staff workload (22.7% Yes, 69.8% No)

Adoption Barriers – Most coverage of this study will likely focus on the fact that only 92 of the surveyed rads (13.3%) plan to acquire AI in the future, while 363 don’t intend to acquire AI (52.6%). The radiologists who don’t plan to adopt AI (including those who’ve never used AI) based their opinions on:

  • AI’s lack of added value (44.4%)
  • AI not performing as well as advertised (26.4%)
  • AI adding too much work (22.9%)
  • And “no reason” (6.3%)

US Context – These results are in the same ballpark as the ACR’s 2020 US-based survey (33.5% using AI, only 20% of non-users planned to adopt within 5 years), although 2020 feels like a long time ago.

The Takeaway

Even if this ESR survey might leave you asking more questions (What about AI’s impact on patient care? How often is AI actually being used? How do opinions differ between AI users and non-users?), more than anything it confirms what many of us already know… We’re still very early in AI’s evolution, and there are still plenty of performance and perception barriers that AI has to overcome.

Chest CT AI Efficiency

A new AJR study out of the Medical University of South Carolina showed that Siemens Healthineers’ AI-RAD Companion Chest CT solution significantly reduced radiologists’ interpretation times. Considering that radiologist efficiency is often sacrificed in order to achieve AI’s accuracy and prioritization benefits, this study is worth a deeper look.

MUSC integrated Siemens’ AI-RAD Companion Chest CT into its PACS workflow, providing its radiologists with automated image analysis, quantification, visualization, and results for several key chest CT exams.

Three cardiothoracic radiologists were randomly assigned chest CT exams from 390 patients (195 w/ AI support), and AI-supported interpretations proved significantly faster on average:

  • For the combined readers – 328 vs. 421 seconds 
  • For each individual radiologist – 289 vs. 344; 449 vs. 649; 281 vs. 348 seconds
  • For contrast-enhanced scans – 20% faster
  • For non-contrast scans – 24.2% faster
  • For negative scans – 26.4% faster
  • For positive scans without significant new findings – 25.7% faster
  • For positive scans with significant new findings – 20.4% faster

Overall, the solution produced a 22.1% average reduction in radiologist interpretation times, which works out to roughly an hour saved per typical workday.
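
As a back-of-envelope check on that hour-per-workday figure (our arithmetic based on the combined-reader averages; the ~39-exam daily caseload is implied, not stated in the study):

```python
pre, post = 421, 328                  # mean seconds per interpretation, without/with AI
saved = pre - post                    # 93 seconds saved per exam
print(f"reduction: {saved / pre:.1%}")                     # 22.1%
print(f"exams needed to save 1 hour: {3600 / saved:.0f}")  # ~39
```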

The authors didn’t explore the solution’s impact on radiologist accuracy, noting that AI accuracy has already been covered in plenty of previous studies. In fact, members of this same MUSC research team previously showed that AI-RAD Companion Chest CT identified abnormalities more accurately than many of its radiologists.

The Takeaway

Out of the hundreds of AI studies we see each year, very few have tried to measure efficiency gains, and even fewer have shown that AI actually reduces radiologist interpretation times. Given the massive exam volumes that radiologists are facing and the crucial role efficiency plays in AI ROI calculations, these results are particularly encouraging, and suggest that AI can indeed improve both accuracy and efficiency.

Burdenless Incidental AI

A team of IBM Watson Health researchers developed an interesting image and text-based AI system that could significantly improve incidental lung nodule detection, without being “overly burdensome” for radiologists. That seems like a clinical and workflow win-win for any incidental AI system, and makes this study worth a deeper look.

Watson Health’s R&D-stage AI system automatically detects potential lung nodules in chest and abdominal CTs, and then analyzes the text in corresponding radiology reports to confirm whether they mention lung nodules. In clinical practice, the system would flag exams with potentially missed nodules for radiologist review.
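
In concept, that cross-referencing step is straightforward. Here’s a minimal sketch of the logic (our own illustration with a hypothetical regex and function names, not Watson Health’s implementation):

```python
import re

# Hypothetical pattern for nodule mentions in radiology report text
NODULE_PATTERN = re.compile(r"\bnodules?\b|\bnodular opacit(?:y|ies)\b", re.IGNORECASE)

def flag_for_review(image_model_found_nodule: bool, report_text: str) -> bool:
    """Flag exams where the image model sees a nodule the report never mentions."""
    report_mentions_nodule = bool(NODULE_PATTERN.search(report_text))
    return image_model_found_nodule and not report_mentions_nodule

print(flag_for_review(True, "No focal consolidation. Heart size normal."))  # True (flagged)
print(flag_for_review(True, "Stable 4 mm right upper lobe nodule."))        # False
```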

The researchers used the AI system to analyze 32k CTs sourced from three health systems in the US and UK. They then had radiologists review the 415 studies that the AI system flagged for potentially missed pulmonary nodules, finding that it:

  • Caught 100 exams containing at least one missed nodule
  • Flagged 315 exams that didn’t feature nodules (false positives)
  • Achieved a 24% overall positive predictive value
  • Produced just a ~1% false positive rate (both rates are derived in the sketch below)
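
Both rates follow from the counts above: PPV is true positives over all flagged exams, while the ~1% false positive rate appears to use the full 32k-exam denominator (our reading of the reported figures):

```python
flagged_tp, flagged_fp, total_exams = 100, 315, 32_000

ppv = flagged_tp / (flagged_tp + flagged_fp)  # 100 / 415
fpr = flagged_fp / total_exams                # 315 / 32,000 (nearly all exams are negative)
print(f"PPV: {ppv:.0%}, FPR: {fpr:.1%}")      # PPV: 24%, FPR: 1.0%
```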

The AI system’s combined ability to detect missed pulmonary nodules while “minimizing” radiologists’ re-reading labor was enough to make the authors optimistic about this type of AI. They specifically suggested that it could be a valuable addition to quality assurance programs, improving patient care while avoiding the healthcare and litigation costs that can come from missed findings.

The Takeaway

Watson Health’s new AI system adds to incidental AI’s growing momentum, joining a number of research and clinical-stage solutions that emerged in the last two years. However, this system’s ability to cross-reference radiology report text and its apparent knack for minimizing false positives set it apart.

Even if most incidental AI tools aren’t ready for everyday clinical use, and their potential to increase re-read labor might be alarming to some rads, these solutions’ ability to catch earlier stage diseases and minimize the impact of diagnostic “misses” could earn the attention of a wide range of healthcare stakeholders going forward.

Imaging AI’s Unseen Potential

Amid the dozens of imaging AI papers and presentations that came out over the last few weeks were three compelling new studies highlighting how much “unseen” information AI can extract from medical images, and the massive impact this information could have. 

Imaging-Led Population Health – An excellent presentation from Ayis Pyrros, MD, placed radiology at the center of healthcare’s transition to value-based care and population health, highlighting the AI training opportunities that will come with more value-based care HCC codes and imaging AI’s untapped potential for early disease detection and management. Dr. Pyrros specifically emphasized chest X-ray’s potential given the exam’s ubiquity (26M Medicare CXRs in 2021), CXR AI’s ability to predict outcomes (e.g. mortality, comorbidities, hospital stays), and how opportunistic AI screening can/should support proactive care that benefits both patients and health systems.

  • Healthcare’s value-based overhaul has traditionally been seen as a threat to radiology’s fee-for-service foundations. Even if that might still be true from a business model perspective, Dr. Pyrros makes it quite clear that the shift to value-based care could make radiology even more important — and importance is always good for business.

AI Race Detection – The final peer-reviewed version of the landmark study showing that AI models can accurately predict patient race was officially published, further confirming that AI can detect patients’ self-reported race by analyzing medical image features. The new paper showed that AI very accurately detects patient race across modalities and anatomical regions (AUCs: CXRs 0.91 – 0.99, chest CT 0.89 – 0.96, mammography 0.81), without relying on proxies or imaging-related confounding features (BMI, disease distribution, and breast density all had ≤0.61 AUCs).

  • If imaging AI models intended for clinical tasks can identify patients’ races, they could be applying the same racial biomarkers to diagnosis, thus reproducing or exacerbating healthcare’s existing racial disparities. That’s an important takeaway whether you’re developing or adopting AI.

CXR Cost Predictions – The smart folks at the UCSF Center for Intelligent Imaging developed a series of CXR-based deep learning models that can predict patients’ future healthcare costs. Developed with 21,872 frontal CXRs from 19,524 patients, the best-performing models identified with reasonable accuracy which patients would have top-50% personal healthcare costs after one, three, and five years (AUCs: 0.806, 0.771, 0.729). 

  • Although predicting which patients will have higher costs could be useful on its own, these findings also suggest that similar CXR-based DL models could be used to flag patients who may deteriorate, initiate proactive care, or support healthcare cost analysis and policies.

The Case for Algorithmic Audits

A new Lancet Digital Health study could have become one of the many “AI rivals radiologists” papers that we see each week, but it instead served as an important lesson that traditional performance tests might not prove that AI models are actually safe for clinical use.

The Model – The team developed their proximal femoral fracture detection DL model using 45.7k frontal X-rays performed at Australia’s Royal Adelaide Hospital (w/ 4,861 fractures).

The Validation – They then tested it against a 4,577-exam internal set (w/ 640 fractures), 400 of which were also interpreted by five radiologists (w/ 200 fractures), and against an 81-image external validation set from Stanford.

The Results – All three tests produced results that a typical study might have viewed as evidence of high-performance: 

  • The model outperformed the five radiologists (0.994 vs. 0.969 AUCs)
  • It beat the best-performing radiologist’s sensitivity (95.5% vs. 94.5%) and specificity (99.5% vs. 97.5%)
  • It generalized well with the external Stanford data (0.980 AUC)

The Audit – Despite the strong results, a follow-up audit revealed that the model might make some predictions for the wrong reasons, suggesting that it is unsafe for clinical deployment:

  • One false negative X-ray included an extremely displaced fracture that human radiologists would catch
  • X-rays featuring abnormal bones or joints had a 50% false negative rate, far higher than the reader set’s overall false negative rate (2.5%)
  • Saliency maps showed that AI decisions were almost never based on the outer region of the femoral neck, even with images where that region was clinically relevant (though it still often made the right diagnosis)
  • The model scored a high AUC with the Stanford data, but showed a substantial operating point shift (illustrated in the sketch below)
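
That last failure mode is easy to miss because AUC is threshold-independent. Here’s a minimal synthetic illustration of an operating point shift (toy data of our own, not the study’s):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Toy internal and external sets: same class separability, but the external
# site's score distribution is shifted (e.g. different scanners/protocols).
y_int = rng.integers(0, 2, 1000)
s_int = rng.normal(y_int * 2.0, 1.0)
y_ext = rng.integers(0, 2, 1000)
s_ext = rng.normal(y_ext * 2.0, 1.0) + 0.8   # shifted scores, similar AUC

def youden_threshold(y, s):
    """Operating point chosen on internal data (max TPR - FPR)."""
    fpr, tpr, thr = roc_curve(y, s)
    return thr[np.argmax(tpr - fpr)]

def sens_spec(y, s, t):
    pred = s >= t
    return pred[y == 1].mean(), (~pred)[y == 0].mean()

t = youden_threshold(y_int, s_int)
print("internal sens/spec:", sens_spec(y_int, s_int, t))
print("external sens/spec:", sens_spec(y_ext, s_ext, t))  # specificity collapses
```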

The Case for Auditing – Although the study might not have started with this goal, it ended up becoming an argument for more sophisticated preclinical auditing. It even led to a separate paper outlining the team’s algorithmic auditing process, which among other things suggested that AI users and developers should co-own audits.

The Takeaway

Auditing generally isn’t the most exciting topic in any field, but this study shows that it’s exceptionally important for imaging AI. It also suggests that audits might be necessary for achieving the most exciting parts of AI, like improving outcomes and efficiency, earning clinician trust, and increasing adoption.
