SIIM 2022 Recap

The first in-person SIIM meeting since COVID hit is officially a wrap, delivering the latest in informatics and a family reunion vibe that might have surpassed any other imaging event. Here are the top takeaways from the biggest imaging informatics conference of the year.

Crowds & Conversations – We understand there were 300 to 400 on-site attendees at SIIM 2022 (excluding exhibitors), with far more attendees in the educational sessions and afterparties than in the exhibit hall booths. Still, it was clear that there’s no better place for informatics leaders and vendors to get together than SIIM.

Big Cloud – The shift to the cloud felt more inevitable than ever last week. The cloud was at the center of nearly every vendor’s and provider’s informatics roadmap, while the AWS/GCP/Azure “healthcare cloud land grab” appears to be having an underrated influence on cloud adoption. That said, SIIM22’s cloud PACS conversations hadn’t changed much from previous years…

  • Everyone still agrees about the cloud’s security and administrative upsides
  • PACS vendors are still debating cloud native vs. cloud enabled (…and questioning whether providers know the difference or care as much as they do)
  • Nobody is willing to adopt cloud at the expense of PACS performance
  • And because of that, hybrid cloud remains the realistic starting point for many providers

Integrating AI – AI remained a major theme at SIIM, although most conversations focused on how to adopt and integrate AI (and then get ROI), rather than how AI can improve diagnosis. That probably explains why the exhibit hall featured far more AI distributors (AI marketplaces, PACS AI platforms, etc.) than AI developers, and it serves as a good reminder for AI vendors to continue improving their integration capabilities.

Productivity Hacks – Unsurprisingly, radiologist productivity was a common theme through the presentations and exhibit hall booths, ranging from the ultra-logical (fast PACS, administrative AI) to the ultra-ambitious (single-vendor unified imaging IT systems). 

Inconsistent Imaging – This might be old news to many of you, but I was amazed to learn how far many organizations are from achieving informatics best practices. I heard a lot about patched-together workflows, outdated PACS versions, inconsistent site setups, antiquated image sharing, and narrowly-defined enterprise imaging. The silver lining is that there’s plenty of room for improvement, but it also suggests that some imaging organizations will need a lot of work before they’re technologically prepared for the next-gen stuff we talked about all week.

The Takeaway

SIIM 2022 made it abundantly clear that there are seismic changes coming to imaging informatics, and even if those changes will probably take longer than some might hope, their impact might be greater than many of us expect. There are also plenty of opportunities to improve radiology workflows in the short-term, and some of the smartest people in healthcare are ready to deliver these improvements.

Burdenless Incidental AI

A team of IBM Watson Health researchers developed an interesting image and text-based AI system that could significantly improve incidental lung nodule detection, without being “overly burdensome” for radiologists. That seems like a clinical and workflow win-win for any incidental AI system, and makes this study worth a deeper look.

Watson Health’s R&D-stage AI system automatically detects potential lung nodules in chest and abdominal CTs, and then analyzes the text in corresponding radiology reports to confirm whether they mention lung nodules. In clinical practice, the system would flag exams with potentially missed nodules for radiologist review.
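For readers who want a sense of the mechanics, here is a minimal sketch of that cross-referencing logic in Python. The detect_nodules() function and the keyword check are hypothetical stand-ins for Watson Health’s actual image and NLP models, which aren’t described in detail above.

```python
import re

# Crude keyword proxy for the system's report-text analysis (hypothetical).
NODULE_TERMS = re.compile(r"\b(nodule|nodular opacity|pulmonary mass)\b", re.IGNORECASE)

def report_mentions_nodule(report_text: str) -> bool:
    """Return True if the radiology report already mentions a nodule."""
    return bool(NODULE_TERMS.search(report_text))

def flag_for_review(ct_exam, report_text: str, detect_nodules) -> bool:
    """Flag an exam when the image model finds candidate nodules but the
    corresponding report never mentions them (a potentially missed finding)."""
    candidates = detect_nodules(ct_exam)  # image-based detection model (hypothetical)
    return bool(candidates) and not report_mentions_nodule(report_text)
```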

The researchers used the AI system to analyze 32k CTs sourced from three health systems in the US and UK. They then had radiologists review the 415 studies that the AI system flagged for potentially missed pulmonary nodules, finding that it:

  • Caught 100 exams containing at least one missed nodule
  • Flagged 315 exams that didn’t feature nodules (false positives)
  • Achieved a 24% overall positive predictive value
  • Produced just a 1% false positive rate
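As a quick back-of-the-envelope check (simple arithmetic from the counts above, not the authors’ code), those figures fit together as follows. Note that using all 32k analyzed CTs as the false positive rate’s denominator is our assumption.

```python
flagged = 415          # exams the AI flagged for radiologist review
true_positives = 100   # flagged exams with at least one missed nodule
false_positives = 315  # flagged exams without nodules
total_exams = 32_000   # all CTs analyzed (approximate)

ppv = true_positives / flagged       # ≈ 24.1%, matching the reported ~24% PPV
fpr = false_positives / total_exams  # ≈ 0.98%, matching the reported ~1% false positive rate
print(f"PPV ≈ {ppv:.1%}, false positive rate ≈ {fpr:.1%}")
```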

The AI system’s combined ability to detect missed pulmonary nodules while “minimizing” radiologists’ re-reading labor was enough to make the authors optimistic about this type of AI. They specifically suggested that it could be a valuable addition to Quality Assurance programs, improving patient care while avoiding the healthcare and litigation costs that can come from missed findings.

The Takeaway

Watson Health’s new AI system adds to incidental AI’s growing momentum, joining a number of research and clinical-stage solutions that emerged in the last two years. However, this system’s ability to cross-reference radiology report text and its apparent ability to minimize false positives set it apart from most of those solutions.

Even if most incidental AI tools aren’t ready for everyday clinical use, and their potential to increase re-read labor might be alarming to some rads, these solutions’ ability to catch earlier stage diseases and minimize the impact of diagnostic “misses” could earn the attention of a wide range of healthcare stakeholders going forward.

MRI Accessibility Advantage

Memorial MRI and Diagnostic’s COO Todd Greene starred in a recent Aunt Minnie webinar, detailing the role MRI accessibility plays in the Texas imaging group’s strategy, and sharing some very relevant takeaways for imaging providers and vendors.

Founded in 2001, Memorial MRI and Diagnostic (MMD) operates 16 imaging centers across Texas, including eight in greater Houston and eight Dallas-area locations added through its 2021 acquisition of Prime Diagnostic Imaging. 

  • MMD’s strategy focuses on integrating its imaging centers within their local communities, making patient access and referring physician relationships particularly important.

In addition to proximity to patients, MMD’s MRI accessibility strategy historically focused on maintaining a fleet of open bore 1.5T MRI scanners to accommodate larger and claustrophobic patients. 

  • This is especially important given that many of MMD’s patients are “Texas-sized” or don’t realize they’re claustrophobic until the scan begins. 

That strategy started to change when MMD installed United Imaging’s ultra-wide-bore (75 cm) 3T uMR OMEGA, allowing it to scan larger and claustrophobia-prone patients (plus all other patients) without open MRIs’ scan speed and image quality tradeoffs. 

  • The uMR OMEGA was the first 3T MRI at any of MMD’s imaging centers, although Greene expects its patient and referrer-friendly advantages to drive a continued shift towards wide-bore 3T MRI systems.

Greene also detailed Memorial MRI’s alliance with United Imaging (the webinar’s sponsor), specifically highlighting the scalability of UIH’s “Software for Life” (scanners automatically updated with future software) and “All-In” (scanners include all possible features/packages) policies.

As the webinar wrapped up, Greene warned imaging centers not to blindly rely on what has worked in the past, predicting that “ease of access is what is going to shape the future of healthcare.” 

The Takeaway

We get plenty of insights from the medical center side of radiology, but it’s still rare to hear from imaging center chains. That makes MMD’s insights particularly useful for the many regional imaging providers who’d like to improve MRI accessibility (without open MRI’s tradeoffs) and for MRI OEMs looking to drive 3T MRI adoption in an imaging provider segment that historically favored 1.5T systems.

Autonomous & Ultrafast Breast MRI

A new study out of the University of Groningen highlighted the scanning and diagnostic efficiency advantages that might come from combining ultrafast breast MRI with autonomous AI. That might make some readers uncomfortable, but the fact that autonomous AI is one of 2022’s most controversial topics makes this study worth some extra attention.

The researchers used 837 “TWIST” ultrafast breast MRI exams from 488 patients (118 abnormal breasts, 34 w/ malignant lesions) to train and validate a deep learning model to detect and automatically exclude normal exams from radiologist workloads. They then tested it against 178 exams from 149 patients at the same institution (55 abnormal, 30 w/ malignant lesions), achieving a 0.81 AUC.

When evaluated at a conservative 0.25 detection error threshold, the DL model:

  • Achieved a 98% sensitivity and a 98% negative predictive value
  • Misclassified one abnormal exam as normal (out of 55)
  • Correctly classified all exams with malignant lesions
  • Would have reduced radiologists’ exam workload by 6.2% (-15.7% at breast level)

When evaluated at a 0.37 detection error threshold, the model:

  • Achieved 95% sensitivity and a 97% negative predictive value (still high)
  • Misclassified three abnormal exams (3 of 55), including one malignant lesion
  • Would have reduced radiologists’ exam workload by 15.7% (-30.6% at breast level)
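As a quick sanity check on those sensitivity and workload figures (simple arithmetic from the reported counts, not the authors’ code):

```python
abnormal_exams = 55     # abnormal exams in the 178-exam test set
total_test_exams = 178

# (detection error threshold, abnormal exams misclassified as normal, reported exam-level workload cut)
for threshold, missed, workload_cut in [(0.25, 1, 0.062), (0.37, 3, 0.157)]:
    sensitivity = (abnormal_exams - missed) / abnormal_exams
    excluded = round(total_test_exams * workload_cut)
    print(f"threshold {threshold}: sensitivity ≈ {sensitivity:.1%}, "
          f"~{excluded} of {total_test_exams} exams auto-excluded")
# → ≈98.2% and ≈94.5% sensitivity (the reported 98% and 95%),
#   with roughly 11 and 28 exams never reaching a radiologist.
```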

These radiologist workflow improvements would complement the TWIST ultrafast MRI sequence’s far shorter magnet time than current protocols (2 vs. 20 minutes), while the DL model could further reduce scan times by automatically ending exams once they are flagged as normal. 

The Takeaway

Even if the world might not be ready for this type of autonomous AI workflow, this study is a good example of how abbreviated MRI protocols and AI could improve both imaging team and radiologist efficiency. It’s also the latest in a series of studies exploring how AI could exclude normal scans from radiologist workflows, suggesting that the development and design of this type of autonomous AI will continue to mature.

SubtlePET Validations

Two new studies out of France added to the growing field of evidence supporting Subtle Medical’s SubtlePET solution, with each confirming that it allows shorter-duration PET exams without affecting image quality. 

The first study, published in EJNMMI Physics, proclaimed SubtlePET “ready to be used in clinical practice for half-time or half-dose acquisitions” after it restored 18F-FDG PET/CT exams from three different scanners without impacting diagnostic confidence.

The researchers performed 18F-FDG PET/CT exams on 110 patients, producing full-acquisition, 50%-reduced, and 66%-reduced images (PET100, PET50, and PET33). They then denoised the images with SubtlePET and had two senior nuclear physicians evaluate them, finding that SubtlePET improved:

  • PET33 image quality from 16.7% to 86.7% “interpretable” & 0% to 26.7% “good”
  • PET50 image quality from 83.6% to 100% “interpretable” & 1.8% to 84.5% “good”
  • High-BMI patients’ PET100 exams from 60% to 80% “good” image quality (both were 100% interpretable)

The second study out of France’s Baclesse Cancer Center further confirmed that SubtlePET preserves 18F-FDG PET image quality with half-duration exams. 

The researchers performed 90-second and 45-second 18F-FDG PET/CT exams on 195 patients (PET90 & PET45), and then used SubtlePET to denoise the 45-second images, finding that:  

  • PET45 exams produced mediocre image quality (8% poor, 68% moderate) and achieved an 88.7% lesion concordance rate with PET90
  • After SubtlePET enhancement, PET45’s image quality matched PET90 (both 92% good, 8% moderate) and achieved a 97.7% lesion concordance rate with PET90
  • 7 of the discordant lesions (0.8%) were only detected with PET90 and 13 (1.5%) were exclusively detected with SubtlePET-enhanced PET45 images
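Those percentages imply a denominator of roughly 870 to 875 detected lesions, which isn’t stated above but can be back-calculated (our inference, not a figure from the paper):

```python
only_pet90 = 7            # lesions seen only on the full 90-second exams (0.8%)
only_enhanced_pet45 = 13  # lesions seen only on SubtlePET-enhanced 45-second exams (1.5%)

discordant = only_pet90 + only_enhanced_pet45  # 20 lesions
implied_total = round(only_pet90 / 0.008)      # 7 / 0.8% ≈ 875 lesions (inferred)
concordance = 1 - discordant / implied_total   # ≈ 97.7%, matching the reported rate
print(f"≈{implied_total} lesions, concordance ≈ {concordance:.1%}")
```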

The Takeaway

May was a particularly big research month, but SubtlePET has been on an academic hot streak for over a year, including at least three previous studies validating its performance with lower radiotracer dosage and faster acquisition times.

Subtle Medical’s marketing currently appears to focus on SubtlePET’s support for shorter scans, but it’s easy to see how patients and clinicians would welcome both shorter scans and lower radiotracer dosage, and the research increasingly seems to validate both use cases.

Automating Stress Echo

A new JACC study showed that Ultromics’ EchoGo Pro AI solution can accurately classify stress echocardiograms, while improving clinician performance with a particularly challenging and operator-dependent exam. 

The researchers used EchoGo Pro to independently analyze 154 stress echo studies, leveraging the solution’s 31 image features to identify patients with severe coronary artery disease with a 0.927 AUC (84.4% sensitivity; 92.7% specificity). 

EchoGo Pro maintained similar performance with a version of the test dataset that excluded the 38 patients with known coronary artery disease or resting wall motion abnormalities (90.5% sensitivity; 88.4% specificity).

The researchers then had four physicians with different levels of stress echo experience analyze the same 154 studies with and without AI support, finding that the EchoGo Pro reports:

  • Improved the readers’ average AUC – 0.877 vs. 0.931
  • Increased their mean sensitivity – 85% vs. 95%
  • Didn’t hurt their specificity – 83.6% vs. 85%
  • Increased their number of confident reads – 440 vs. 483
  • Reduced their number of non-confident reads – 152 vs. 109
  • Improved their diagnostic agreement rates – 0.68-0.79 vs. 0.83-0.97

The Takeaway

Ultromics’ stress echo reports improved the physicians’ interpretation accuracy, confidence, and reproducibility, without increasing false positives. That list of improvements satisfies most of the requirements clinicians have for AI (in addition to speed/efficiency), and it represents another solid example of echo AI’s real-world potential.

Imaging AI’s Unseen Potential

Amid the dozens of imaging AI papers and presentations that came out over the last few weeks were three compelling new studies highlighting how much “unseen” information AI can extract from medical images, and the massive impact this information could have. 

Imaging-Led Population Health – An excellent presentation from Ayis Pyrros, MD placed radiology at the center of healthcare’s transition to value-based care and population health, highlighting the AI training opportunities that will come with more value-based care HCC codes and imaging AI’s untapped potential for early disease detection and management. Dr. Pyrros specifically emphasized chest X-ray’s potential given the exam’s ubiquity (26M Medicare CXRs in 2021), CXR AI’s ability to predict outcomes (e.g. mortality, comorbidities, hospital stays), and how opportunistic AI screening can/should support proactive care that benefits both patients and health systems.

  • Healthcare’s value-based overhaul has traditionally been seen as a threat to radiology’s fee-for-service foundations. Even if that might still be true from a business model perspective, Dr. Pyrros makes it quite clear that the shift to value-based care could make radiology even more important — and importance is always good for business.

AI Race Detection – The final peer-reviewed version of the landmark study showing that AI models can accurately predict patient race was officially published, further confirming that AI can detect patients’ self-reported race by analyzing medical image features. The new paper showed that AI very accurately detects patient race across modalities and anatomical regions (AUCs: CXRs 0.91 – 0.99, chest CT 0.89 – 0.96, mammography 0.81), without relying on proxies or imaging-related confounding features (BMI, disease distribution, and breast density all had ≤0.61 AUCs).

  • If imaging AI models intended for clinical tasks can identify patients’ races, they could be applying the same racial biomarkers to diagnosis, thus reproducing or exacerbating healthcare’s existing racial disparities. That’s an important takeaway whether you’re developing or adopting AI.

CXR Cost Predictions – The smart folks at the UCSF Center for Intelligent Imaging developed a series of CXR-based deep learning models that can predict patients’ future healthcare costs. Developed with 21,872 frontal CXRs from 19,524 patients, the best performing models identified, with reasonable accuracy, which patients would fall into the top 50% of personal healthcare costs after one, three, and five years (AUCs: 0.806, 0.771, 0.729). 

  • Although predicting which patients will have higher costs could be useful on its own, these findings also suggest that similar CXR-based DL models could be used to flag patients who may deteriorate, initiate proactive care, or support healthcare cost analysis and policies.

AI-Assisted Radiographers

A new European Radiology study provided what might be the first insights into whether AI can allow radiographers to independently read lung cancer screening exams, while alleviating the resource challenges that have slowed LDCT screening program rollouts.

This is the type of study that makes some radiologists uncomfortable, but its results suggest that rads’ role in lung cancer screening remains very secure.

The researchers had two trained UK-based radiographers read 716 LDCT exams (158 w/ significant pulmonary nodules) using a computer-assisted detection (CADe) AI solution, and compared their interpretations with those from radiologists who didn’t have CADe assistance.

The radiographers had significantly lower sensitivity than the radiologists (68% & 73.7%; p < 0.001), leading to 61 false negative exams. However, the two CADe-assisted radiographers did achieve:

  • Good sensitivity with cancers confirmed from baseline scans – 83.3% & 100%
  • Relatively high specificity – 92.1% & 92.7%
  • Low false-positive rates – 7.9% and 7.3%

The CADe AI solution might have both helped and hurt the radiographers’ performance, as CADe missed 20 of the radiographers’ 40 false negative nodules, and four of their seven false negative malignant nodules. 

Even as LDCT CADe tools become far more accurate, they might not be able to fill in radiographers’ incidental findings knowledge gap. The radiographers achieved either “good” or “fair” interobserver agreement rates with radiologists for emphysema and CAC findings, but the variety of other incidental pathologies was “too broad to reasonably expect radiographers to detect and interpret.”

The Takeaway

Although CADe-assisted radiographer studies might concern some radiologists, this seems like an important aspect of AI to understand given the workload demands that come with lung cancer screening programs, and the need to better understand how clinicians and AI can work together. 

The good news for concerned radiologists is that this study shows LDCT reporting is too complex and current CADe solutions are too limited for CADe-equipped radiographers to independently read LDCTs… “at least for the foreseeable future.”

BAMF & United Imaging’s Precision Medicine Milestone

BAMF Health took a big step in its precision medicine strategy, installing United Imaging’s uEXPLORER total-body PET/CT scanner as it prepares to open its theranostics treatment center. 

Founded in 2018, BAMF Health (Bold Advanced Medical Future) has taken a unique approach to advanced therapy development, combining the world’s “most advanced” radiopharmacy, its proprietary AI platform, and top molecular imaging technology to deliver hyper-personalized and targeted treatments.

Installing United Imaging’s uEXPLORER total-body PET/CT scanner represents a key final addition to BAMF Health’s precision medicine stack, and makes it the first institution in the US using total-body PET for theranostics. More importantly, the uEXPLORER will allow BAMF Health to deliver more effective and efficient theranostics treatments by:

  • Imaging patients’ entire bodies in a single scan (vs. “eyes to thighs”)
  • Detecting and targeting signs of cancer smaller than two millimeters (vs. 1 cm)
  • Scanning patients in just one minute (vs. up to 1hr)
  • Reducing radiation dosage by up to 40x

BAMF Health’s launch might also represent an early theranostics paradigm shift, highlighting the potential roles of private clinics (vs. academic/large institutions) and total-body PET/CT systems (vs. whole-body scanners) in delivering this advanced therapy.

BAMF Health will begin treating patients for prostate cancer and neuroendocrine tumors at its Michigan-based clinic this summer, but plans to deliver a wide range of personalized treatments that extend well beyond cancer in the future (e.g. Alzheimer’s, Parkinson’s, cardiac diseases, endometriosis, chronic pain) and treat patients from around the country.

The Takeaway

Although BAMF Health still has a lot to prove, its upcoming clinical launch might be a key milestone in the evolution of theranostics and molecular imaging.

The Radiologist Skill Gap

A new Stanford study revealed that diagnostic variations are largely due to differences in radiologist skill levels (not work styles/preferences, etc.), suggesting that physician skill gaps might represent a major source of healthcare waste, and warning that efforts to standardize care could lead to even worse results. 

The researchers analyzed 4.67M CXR interpretations from patients with suspected pneumonia, finding that radiologist skill level accounted for 39% of variations in positive diagnoses (both true & false) and 78% of variations in missed diagnoses. Those variations had a major impact on patient care:

  • Reassigning a patient from a radiologist in the 10th to 90th percentile for positive diagnostic rates would increase their probability of receiving a positive diagnosis from 8.9% to 12.3%.
  • Reassigning a patient from a radiologist in the 10th to 90th percentile for missed diagnosis rates would increase their probability of receiving a false negative from 0.2% to 1.8%.

Perhaps counterintuitively, they found that the radiologists who were more likely to diagnose patients with pneumonia were also more likely to submit false negative diagnoses, suggesting that less skilled radiologists are responsible for an outsized share of unnecessary, delayed, and inconsistent care.

Skill can be hard to define, but the researchers found that the “most skilled radiologists” were generally older and more experienced, wrote shorter reports, and spent more time on each report.

The researchers weren’t specifically trying to understand radiologist skill variations with this study, and their main takeaway is that we might have to change our assumptions about how to fix the U.S. healthcare system:

  • Healthcare inefficiency might have more to do with physician performance, and less to do with other commonly cited issues (e.g. misaligned payor/provider incentives) 
  • Relying on standardized approaches to equalize patient care and address cost variations might actually lead to worse care and higher costs

The Takeaway

Most readers probably aren’t surprised to hear that some radiologists are way more accurate than others, and that diagnostic skill increases with age/experience. However, this study gives new evidence supporting the value of quality improvement efforts, and could make it easier to demonstrate how radiology products/processes that reduce variability but don’t generate revenue (like AI…) could deliver clearer ROI than some might expect.
