Imaging AI’s Unseen Potential

Amid the dozens of imaging AI papers and presentations that came out over the last few weeks were three compelling new studies highlighting how much “unseen” information AI can extract from medical images, and the massive impact this information could have. 

Imaging-Led Population Health – An excellent presentation from Ayis Pyrros, MD placed radiology at the center of healthcare’s transition to value-based care and population health, highlighting the AI training opportunities that will come with more value-based care HCC codes and imaging AI’s untapped potential for early disease detection and management. Dr. Pyrros specifically emphasized chest X-ray’s potential given the exam’s ubiquity (26M Medicare CXRs in 2021), CXR AI’s ability to predict outcomes (e.g. mortality, comorbidities, hospital stays), and how opportunistic AI screening can/should support proactive care that benefits both patients and health systems.

  • Healthcare’s value-based overhaul has traditionally been seen as a threat to radiology’s fee-for-service foundations. Even if that might still be true from a business model perspective, Dr. Pyrros makes it quite clear that the shift to value-based care could make radiology even more important — and importance is always good for business.

AI Race Detection – The final peer-reviewed version of the landmark study showing that AI models can accurately predict patient race was officially published, further confirming that AI can detect patients’ self-reported race by analyzing medical image features. The new paper showed that AI very accurately detects patient race across modalities and anatomical regions (AUCs: CXRs 0.91 – 0.99, chest CT 0.89 – 0.96, mammography 0.81), without relying on proxies or imaging-related confounding features (BMI, disease distribution, and breast density all had ≤0.61 AUCs).

  • If imaging AI models intended for clinical tasks can identify patients’ races, they could be applying the same racial biomarkers to diagnosis, thus reproducing or exacerbating healthcare’s existing racial disparities. That’s an important takeaway whether you’re developing or adopting AI.
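
For readers curious how that confounder argument is typically checked, here's a minimal sketch comparing how well an image model's outputs versus a single candidate confounder (e.g. BMI) separate self-reported race, scored by AUC. The data, model scores, and logistic model are toy illustrations, not the published study's code or dataset.

```python
# Minimal sketch of the confounder check described above: compare the AUC of the
# image model's race predictions against the AUC of a confounder-only predictor.
# Toy data and a simple logistic model only; not the published study's analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
race_label = rng.integers(0, 2, n)                  # toy binary self-reported race label

# Hypothetical image-model scores that track the label closely (high AUC)...
image_model_score = race_label + rng.normal(0, 0.5, n)
# ...and a confounder (e.g. BMI) that barely tracks it (AUC near 0.5-0.6)
bmi = 27 + 0.5 * race_label + rng.normal(0, 5, n)

print("image model AUC:", roc_auc_score(race_label, image_model_score))

# AUC of a confounder-only predictor
bmi_clf = LogisticRegression().fit(bmi.reshape(-1, 1), race_label)
print("BMI-only AUC:", roc_auc_score(race_label, bmi_clf.predict_proba(bmi.reshape(-1, 1))[:, 1]))
```

If a confounder alone can't separate the classes (AUC near 0.5-0.6) while the image model can (AUC near 0.9+), that confounder can't explain the model's performance, which is the study's core argument.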

CXR Cost Predictions – The smart folks at the UCSF Center for Intelligent Imaging developed a series of CXR-based deep learning models that can predict patients’ future healthcare costs. Developed with 21,872 frontal CXRs from 19,524 patients, the best-performing models identified with reasonable accuracy which patients would have a top-50% personal healthcare cost after one, three, and five years (AUCs: 0.806, 0.771, 0.729).

  • Although predicting which patients will have higher costs could be useful on its own, these findings also suggest that similar CXR-based DL models could be used to flag patients who may deteriorate, initiate proactive care, or support healthcare cost analysis and policies.
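
For readers curious what this kind of model looks like in code, here's a minimal sketch that frames "top-50% future cost" prediction as binary classification on frontal CXRs in PyTorch. The DenseNet-121 backbone, label setup, and training step are illustrative assumptions; the UCSF team's actual architecture and pipeline aren't reproduced here.

```python
# Minimal sketch: framing "top-50% future healthcare cost" prediction as binary
# classification on frontal CXRs. Backbone and labels are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class CostRiskModel(nn.Module):
    def __init__(self):
        super().__init__()
        # In practice you'd start from ImageNet- or CXR-pretrained weights;
        # weights=None keeps this sketch self-contained.
        self.backbone = models.densenet121(weights=None)
        # Replace the 1,000-class ImageNet head with a single logit:
        # P(patient's cost lands in the top 50% at a chosen horizon).
        self.backbone.classifier = nn.Linear(self.backbone.classifier.in_features, 1)

    def forward(self, cxr):            # cxr: (batch, 3, 224, 224) tensor
        return self.backbone(cxr)      # raw logits

model = CostRiskModel()
criterion = nn.BCEWithLogitsLoss()     # binary label: top-50% cost at 1, 3, or 5 years
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on dummy data
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```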

AI-Assisted Radiographers

A new European Radiology study provided what might be the first insights into whether AI can allow radiographers to independently read lung cancer screening exams, while alleviating the resource challenges that have slowed LDCT screening program rollouts.

This is the type of study that makes some radiologists uncomfortable, but its results suggest that rads’ role in lung cancer screening remains very secure.

The researchers had two trained UK-based radiographers read 716 LDCT exams using a computer-assisted detection AI solution (158 w/ significant pulmonary nodules), and compared them with interpretations from radiologists who didn’t have CADe assistance.

The radiographers had significantly lower sensitivity than the radiologists (68% & 73.7%; p < 0.001), leading to 61 false negative exams. However, the two CADe-assisted radiographers did achieve:

  • Good sensitivity with cancers confirmed from baseline scans – 83.3% & 100%
  • Relatively high specificity – 92.1% & 92.7%
  • Low false-positive rates – 7.9% and 7.3%

The CADe AI solution might have both helped and hurt the radiographers’ performance, as CADe missed 20 of the radiographers’ 40 false negative nodules, and four of their seven false negative malignant nodules. 

Even as LDCT CADe tools become far more accurate, they might not be able to fill in radiographers’ incidental findings knowledge gap. The radiographers achieved either “good” or “fair” interobserver agreement rates with radiologists for emphysema and CAC findings, but the variety of other incidental pathologies was “too broad to reasonably expect radiographers to detect and interpret.”

The Takeaway
Although CADe-assisted radiographer studies might concern some radiologists, this seems like an important aspect of AI to explore given the workload demands that come with lung cancer screening programs and the need to better understand how clinicians and AI can work together.

The good news for any concerned radiologists: this study shows that LDCT reporting is too complex and current CADe solutions are too limited for CADe-equipped radiographers to independently read LDCTs… “at least for the foreseeable future.”

BAMF & United Imaging’s Precision Medicine Milestone

BAMF Health took a big step in its precision medicine strategy, installing United Imaging’s uEXPLORER total-body PET/CT scanner as it prepares to open its theranostics treatment center. 

Founded in 2018, BAMF Health (Bold Advanced Medical Future) has applied a unique approach to developing advanced treatments, combining the world’s “most advanced” radiopharmacy, its proprietary AI platform, and top molecular imaging technology to deliver hyper-personalized and targeted treatments.

Installing United Imaging’s uEXPLORER total-body PET/CT scanner represents a key final addition to BAMF Health’s precision medicine stack, and makes it the first institution in the US to use total-body PET for theranostics. More importantly, the uEXPLORER will allow BAMF Health to deliver more effective and efficient theranostics treatments by:

  • Imaging patients’ entire bodies in a single scan (vs. “eyes to thighs”)
  • Detecting and targeting signs of cancer smaller than two millimeters (vs. 1 cm)
  • Scanning patients in just one minute (vs. up to 1hr)
  • Reducing radiation dosage by up to 40x

BAMF Health’s launch might also represent an early theranostics paradigm shift, highlighting the potential role of private clinics (vs. academic/large institutions) and total-body PET/CT systems (vs. “whole-body” systems) with the advanced therapy.

BAMF Health will begin treating patients for prostate cancer and neuroendocrine tumors at its Michigan-based clinic this summer, but plans to deliver a wide range of personalized treatments that extend well beyond cancer in the future (e.g. Alzheimer’s, Parkinson’s, cardiac diseases, endometriosis, chronic pain) and treat patients from around the country.

The Takeaway

Although BAMF Health still has a lot to prove, its upcoming clinical launch might be a key milestone in the evolution of theranostics and molecular imaging.

The Radiologist Skill Gap

A new Stanford study revealed that diagnostic variations are largely due to differences in radiologist skill levels (not work styles/preferences, etc.), suggesting that physician skill gaps might represent a major source of healthcare waste, and warning that efforts to standardize care could lead to even worse results. 

The researchers analyzed 4.67M CXR interpretations from patients with suspected pneumonia, finding that radiologist skill level accounted for 39% of variations in positive diagnoses (both true & false) and 78% of variations in missed diagnoses. Those variations had a major impact on patient care:

  • Reassigning a patient from a radiologist in the 10th percentile to one in the 90th percentile for positive diagnostic rates would increase their probability of receiving a positive diagnosis from 8.9% to 12.3%.
  • Reassigning a patient from a radiologist in the 10th percentile to one in the 90th percentile for missed diagnosis rates would increase their probability of receiving a false negative from 0.2% to 1.8%.

Perhaps counterintuitively, they found that the radiologists who were more likely to diagnose patients with pneumonia were also more likely to submit false negative diagnoses, suggesting that less skilled radiologists are responsible for an outsized share of unnecessary, delayed, and inconsistent care.

Skill can be hard to define, but the researchers found that the “most skilled radiologists” were generally older and more experienced, wrote shorter reports, and spent more time on each report.

The researchers weren’t specifically trying to understand radiologist skill variations with this study, and their main takeaway is that we might have to change our assumptions about how to fix the U.S. healthcare system:

  • Healthcare inefficiency might have more to do with physician performance, and less to do with other commonly cited issues (e.g. misaligned payor/provider incentives) 
  • Relying on standardized approaches to equalize patient care and address cost variations might actually lead to worse care and higher costs

The Takeaway

Most readers probably aren’t surprised to hear that some radiologists are way more accurate than others, and that diagnostic skill increases with age/experience. However, this study gives new evidence supporting the value of quality improvement efforts, and could make it easier to demonstrate how radiology products/processes that reduce variability but don’t generate revenue (like AI…) can deliver clearer ROI than many expect.

iSono Health’s Wearable Breast Ultrasound

iSono Health announced the FDA clearance of its ATUSA automated wearable 3D breast ultrasound system, a first-of-its-kind device that taps into some of the biggest trends in imaging.

The wearable ATUSA system automatically captures the entire breast volume, producing standardized/repeatable breast ultrasound exams in two minutes without requiring a trained operator. The scanner combines with iSono’s ATUSA Software Suite to support real-time 2D visualization, advanced 3D visualization and localization, and AI integration (including iSono’s forthcoming AI tools). That positions the ATUSA for a range of interesting use cases:

  • Enhancing routine exams in primary care and women’s health clinics
  • Expanding breast imaging access in developing countries
  • Supporting longitudinal monitoring for higher-risk women
  • Allowing remote breast cancer monitoring

iSono might have to overcome some pretty big biases regarding how and where providers believe breast exams are supposed to take place. However, the ATUSA’s intended use cases and value propositions have already been gaining momentum across imaging.

  • The rapid expansion of handheld POCUS systems and AI guidance solutions has made ultrasound an everyday tool for far more clinicians than just a few years ago.
  • Wearable imaging continues to be an innovation hotspot, including a range of interesting projects that are developing imaging helmets, patches, and even a few other wearable breast ultrasound systems.
  • There’s a growing focus on addressing the developing world’s imaging gap with portable imaging systems.
  • We’re seeing greater momentum towards technology-enabled enhancements to routine breast exams, including Siemens Healthineers’ recent move to distribute UE LifeSciences’ iBreastExam device (uses vibrations, not imaging).
  • At-home imaging is becoming a far more realistic idea, with commercial initiatives from companies like Butterfly and Pulsenmore in place, and earlier-stage efforts from other breast ultrasound startups. 

The Takeaway

iSono Health has a long way to go before it earns an established role in breast cancer pathways. However, the ATUSA’s use cases and value proposition are well aligned with some of imaging’s biggest trends, and there’s still plenty of demand to improve breast imaging access and efficiency across the world.

Chest Pain Implications

The major cardiac imaging societies weighed in on the AHA/ACC’s new Chest Pain Guidelines, highlighting the notable shifts coming to cardiac imaging, and the adjustments they could require.

The cardiac CT and MRI societies took a victory lap, highlighting CCTA and CMR’s now-greater role in chest pain diagnosis, while forecasting that the new guidelines will bring:

  • Increased demand for cardiac CT & MR exams and scanners
  • A need for more cardiac CT & MR staff, training, and infrastructure
  • Requests for more cardiac CT & MR funding and reimbursements
  • More collaborations across radiology, cardiology, and emergency medicine

The angiography and nuclear cardiology societies were less celebratory. Rather than warning providers to start buying more scanners and training more techs (like CT & MR), they focused on defending their roles in chest pain diagnosis, reiterating their advantages, and pointing out how the new guidelines might incorrectly steer patients to unnecessary or insufficient tests.

FFR-CT’s new role as a key post-CT diagnostic step made headlines when the guidelines came out, but the cardiac imaging societies don’t seem to be ready to welcome the AI approach. The nuclear cardiology and radiology societies called out FFR-CT’s low adoption and limited supporting evidence, while the SCCT didn’t even mention FFR-CT in its statement (and they’re the cardiac CT society!).

Echocardiography maintained its core role in chest pain diagnosis, but the echo society clearly wanted more specific guidelines around who can perform echo exams and how well those clinicians are trained to perform them. That reaction is understandable given the sonographer workforce challenges and the expansion of cardiac POCUS to new clinical roles (w/ less echo training), although some might argue that echo AI tools could help address these problems.

The Takeaway

Imaging and shared decision-making play a prominent role in the new chest pain guidelines, which seems like good news for patient-specific care (and imaging department/vendor revenues), but it also leaves room for debate within the clinic and across clinical societies. 

The JACC seems to understand that it needs to clear up many of these gray areas in future versions of the chest pain guidelines. Until then, it will be up to providers to create decision-making and care pathways that work best for them, and evolve their teams and technologies accordingly.

Chest CT’s Untapped Potential

A new AJR study out of Toronto General Hospital highlighted the largely untapped potential of non-gated chest CT CAC scoring, and the significant impact it could have with widespread adoption.

Current guidelines recommend visual CAC evaluations with all non-gated non-contrast chest CTs. However, these guidelines aren’t consistently followed and they exclude contrast-enhanced chest CTs.

The researchers challenged these practices, performing visual CAC assessments on 260 patients’ non-gated chest CT exams (116 contrast-enhanced, 144 non-contrast) and comparing them to the same patients’ cardiac CT CAC scores (performed within 12 months) and ~6-year cardiac event outcomes.

As you might expect, visual CAC scoring on contrast-enhanced and non-contrast chest CTs (the agreement metrics are sketched in code after this list):

  • Detected CAC with high sensitivity (83% & 90%) and specificity (both 100%)
  • Accurately predicted major cardiac events (Hazard ratios: 4.5 & 3.4)
  • Had relatively benign false negatives (0 of 26 had cardiac events)
  • Achieved high inter-observer agreement (κ=0.89 & 0.95)
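
For readers who want to see how these comparisons are typically scored, here's a minimal sketch of the agreement metrics above (sensitivity, specificity, and Cohen's kappa) computed with scikit-learn on toy CAC-present/CAC-absent labels. The labels and readers are hypothetical, and the study's actual statistical workflow (including the survival models behind the hazard ratios) isn't reproduced here.

```python
# Minimal sketch of the agreement metrics reported above, computed on
# hypothetical binary CAC-present/CAC-absent labels (toy data, not the study's).
from sklearn.metrics import confusion_matrix, cohen_kappa_score

# 1 = CAC present, 0 = CAC absent
gated_reference = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]   # gated cardiac CT CAC score > 0
reader_visual   = [1, 0, 0, 1, 0, 0, 1, 0, 1, 1]   # visual read on non-gated chest CT
second_reader   = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]

tn, fp, fn, tp = confusion_matrix(gated_reference, reader_visual).ravel()
sensitivity = tp / (tp + fn)   # share of CAC-positive patients detected visually
specificity = tn / (tn + fp)   # share of CAC-negative patients correctly cleared

# Inter-observer agreement between the two visual readers (Cohen's kappa)
kappa = cohen_kappa_score(reader_visual, second_reader)

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, kappa={kappa:.2f}")
```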

The Takeaway

Considering that CAC scores were only noted in 37% of the patients’ original non-contrast chest CT reports and 23% of their contrast-enhanced chest CT reports, this study adds solid evidence in favor of more widespread CAC score reporting in non-gated CT exams.

That might also prove to be good news for the folks working on opportunistic CAC AI solutions, noting that AI has (so far) seen the greatest adoption when it supports processes that most radiologists are actually doing.

Radiology’s Smart New Deal

A new Journal of Digital Imaging editorial from UCLA radiology chair Dieter R. Enzmann, MD proposed a complete overhaul of how radiology reports are designed and distributed, in a way that should make sense to radiology outsiders but might make some folks within radiology uncomfortable.

Dr. Enzmann’s “Smart New Deal” proposes that radiology reports and reporting workflows should evolve to primarily support smartphone-based usage for both patients and physicians, ensuring that reports are:

  • Widely accessible 
  • Easily navigated and understood 
  • Built with empathy for current realities (info overload, time scarcity, mobility)
  • And widely utilized… because they are accessible, simple, and understandable

To achieve those goals, Dr. Enzmann proposes a “creative destruction” of our current reporting infrastructure, helped by ongoing improvements in foundational technologies (e.g. cloud, interoperability) and investments from radiology’s tech leaders (or from their future disruptors).

Despite Dr. Enzmann’s impressive credentials, the people of radiology might have a hard time coming to terms with this vision, given that:

  • Radiology reports are mainly intended for referring physicians, and referrers don’t seem to be demanding simplified phone-native reports (yet)
  • This is a big change given how reports are currently formatted and accessed
  • Patient-friendly features that require new labor often face resistance
  • It might make more sense for this smartphone-centric approach to cover patients’ entire healthcare journeys (not just radiology reports)

The Takeaway

It can be hard to envision a future when radiology reports are primarily built for smartphone consumption.

That said, few radiologists or rad vendors would argue against other data-based industries making sure their products (including their newsletters) are accessible, understandable, and actionable. Many might also recognize that some of the hottest imaging segments are already smartphone-native (e.g. AI care coordination solutions, PocketHealth’s imaging sharing, handheld POCUS), while some of the biggest trends in radiology focus on making reports easier for patients and referrers to consume.

Smartphone-first reporting might not be a sure thing, but the trends we’re seeing do suggest that efforts to achieve Dr. Enzmann’s core reporting goals will be rewarded no matter where technology takes us.

Cleerly’s Downstream Effect

A new AJR study showed that Cleerly’s coronary CTA AI solution detects obstructive coronary artery disease (CAD) more accurately than myocardial perfusion imaging (MPI), and could substantially reduce unnecessary invasive angiographies. 

The researchers used Cleerly to analyze coronary CTAs from 301 patients with stable myocardial ischemia symptoms who also received stress MPI exams. They then compared these Cleerly CCTA and MPI results with the patients’ invasive angiography exams, and quantitative coronary angiography (QCA) and fractional flow reserve (FFR) measurements.

The Cleerly-based coronary CTA results significantly outperformed MPI for predicting stenosis and caught cases that MPI-based ischemia results didn’t flag:

  • Cleerly AI detected more patients with obstructive stenosis (≥50%; 0.88 vs. 0.66 AUCs)
  • Cleerly AI identified more patients with severe stenosis (≥70%; 0.92 vs. 0.81 AUCs)
  • Cleerly AI detected far more patients with signs of ischemia in FFR (<0.80; 0.90 vs. 0.71 AUCs) 
  • Out of 102 patients with negative MPI ischemia results, Cleerly identified 55 patients with obstructive stenosis and 20 with severe stenosis (54% & 20%)
  • Out of 199 patients with positive MPI ischemia results, Cleerly identified 46 patients with non-obstructive stenosis (23%)

MPI and Cleerly-based CCTA analysis also worked well together. The combination of ≥50% stenosis via Cleerly and ischemia in MPI achieved 95% sensitivity and 63% specificity for detecting serious stenosis (vs. 74% & 43% using QCA measurements).

Based on those results, pathways that use a Cleerly AI-based CCTA benchmark of ≥70% stenosis to approve patients for invasive angiography would reduce invasive angiography utilization by 39%. Meanwhile, workflows requiring a positive MPI ischemia result and CCTA Cleerly AI benchmark of ≥70% would reduce invasive angiography utilization by 49%.
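
To make those two pathways concrete, here's a minimal sketch of the referral logic in Python. The thresholds (≥70% stenosis, positive MPI ischemia) come from the study, but the function and its inputs are illustrative assumptions, not Cleerly's product logic or the authors' analysis code.

```python
# Minimal sketch of the two referral pathways described above. Thresholds come
# from the study; the function and inputs are illustrative only.

def refer_to_invasive_angiography(ccta_ai_stenosis_pct: float,
                                  mpi_ischemia_positive: bool,
                                  require_mpi: bool = False) -> bool:
    """Return True if the patient would be sent to invasive angiography."""
    severe_stenosis = ccta_ai_stenosis_pct >= 70          # CCTA AI benchmark
    if require_mpi:
        # Pathway 2: requires a positive MPI ischemia result AND >=70% stenosis
        return severe_stenosis and mpi_ischemia_positive
    # Pathway 1: the CCTA AI >=70% stenosis benchmark alone gates the referral
    return severe_stenosis

# Examples
print(refer_to_invasive_angiography(60, True))                      # False
print(refer_to_invasive_angiography(75, False, require_mpi=True))   # False
print(refer_to_invasive_angiography(75, True, require_mpi=True))    # True
```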

The Takeaway
We’re seeing strong research and policy momentum towards using coronary CTA as the primary CAD diagnosis method and reducing reliance on invasive angiography. This and other recent studies suggest that CCTA AI solutions like Cleerly could play a major role in that CCTA-first shift.

NYU’s Video Reporting Experiment

A new AJR study out of NYU just provided what might be the first significant insights into how patient-friendly video reports might impact radiologists and patients.

Leveraging a new Visage 7 video feature and 3D rendering from Siemens Healthineers, NYU organized a four-month study that encouraged and evaluated patient-centered video reports (w/ simple video + audio explanations). 

During the study period, just 105 out of 227 NYU radiologists created videos, resulting in 3,763 total video reports. The videos were included within NYU’s standard radiology reports and made available via its patient portal.

The video reports added an average of 4 minutes of recording time to radiologists’ workflows (± 2:21), with abnormal reports understandably taking longer than normal reports (5:30 vs. 4:15), although that difference wasn’t statistically significant. The authors admitted that video creation has to get faster to achieve clinical adoption, and revealed plans to use standardized voice macros to streamline the process.

Patients viewed just 864 unique video reports, leaving 2,899 videos unviewed. However, when NYU moved the video links above the written section late in the study period, the share of patients who watched their videos jumped from 20% to 40%. Patients who watched the videos also really liked them:

  • Patients scored their overall video report experiences a 4.7 out of 5
  • The videos’ contribution to patients’ diagnostic understanding also scored 4.7 of 5
  • 56% of patients reported reduced anxiety due to the videos (vs. 1% reporting increased anxiety)
  • 91% of patients preferred video + written reports (vs. 2% w/ written-only)

Although referring physicians weren’t the videos’ intended audience, they viewed 214 unique video reports, and anecdotes suggested that the videos helped referrers explain findings to their patients.

The Takeaway

We’ve covered plenty of studies showing that patients want to review their radiology reports, but struggle to understand them. We’ve also seen plenty of suggestions that radiologists want to improve their visibility to patients and highlight their role in patient care.

This study shows that video reports could satisfy both of those needs, while confirming that adopting video reporting wouldn’t require significant infrastructure changes (if your PACS supports video), although it would add about four minutes to each radiologist’s reporting workflow.

That doesn’t suggest a major increase in video reporting will come any time soon, especially considering most practices/departments’ focus on efficiency, but it does make future video reporting adoption seem a lot more realistic (or at least possible).
