AI Predicts Radiology Workload

AI is touted as a tool that can help radiologists lighten their workload. But what if you could use AI to predict when you’ll need help the most? Researchers writing in Academic Radiology did just that, developing an AI algorithm that predicted radiology workload based on three key factors.

Imaging practices are facing pressure from a variety of forces that include rising imaging volume and workforce shortages, with one recent study documenting a sharp workload increase over the past 10 years.

  • Many industry observers believe AI can assist radiologists by helping them reach diagnoses faster or by removing studies most likely to be normal from the worklist.

But researchers and vendors are also developing AI algorithms for operational use – arguably where radiology practices need the most help.

  • AI can predict equipment utilization, or even create a virtual twin of a radiology facility where administrators can adjust various factors like staffing to visualize their impact on operations.

In the new study, researchers from Mass General Brigham developed six machine learning algorithms based on a year of imaging exam volumes from two academic medical centers.

The group entered 707 features into the models, but ultimately settled on three main operational factors that best predicted the next weekday’s imaging workload, in particular for outpatient exams…

  • The current number of unread exams.
  • The number of exams scheduled to be performed after 5 p.m.
  • The number of exams scheduled to be performed the next day.
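
To make the approach concrete, here is a minimal, purely illustrative sketch of a three-feature workload regressor built with scikit-learn on synthetic data; it is not the authors’ model, and the feature names and value ranges are invented for demonstration.

```python
# Illustrative only: a three-feature next-day workload regressor on synthetic data
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n_days = 250  # roughly a year of weekdays (synthetic stand-in for real exam logs)

# Hypothetical features mirroring the three factors listed above
unread_now = rng.integers(50, 400, n_days)           # current unread exams
scheduled_after_5pm = rng.integers(20, 150, n_days)  # exams scheduled after 5 p.m.
scheduled_tomorrow = rng.integers(100, 600, n_days)  # exams scheduled for the next day

X = np.column_stack([unread_now, scheduled_after_5pm, scheduled_tomorrow])
# Synthetic target: next-day workload loosely driven by the three factors plus noise
y = (0.6 * unread_now + 0.8 * scheduled_after_5pm
     + 0.9 * scheduled_tomorrow + rng.normal(0, 25, n_days))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("Mean absolute error (exams):", round(mean_absolute_error(y_test, model.predict(X_test)), 1))
```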

The algorithm’s predictions were put into clinical use with a Tableau dashboard that pulled data from 5 p.m. to 7 a.m. the following day, computed workload predictions, and output its forecast in an online interface they called “BusyBot.”

  • But if you’re only analyzing three factors, do you really need AI to predict the next day’s workload? 

The authors answered this question by comparing the best-performing AI model to estimates made by radiologists looking only at EHR data.

  • Humans either underestimated or overestimated the next day’s volume compared to actual numbers, leading the authors to conclude that AI did a better job of capturing workload dynamics and weighting variables to produce accurate estimates.

The Takeaway

Using AI to predict the next day’s radiology workload is an intriguing twist on the argument that AI can help make radiologists more efficient. Better yet, this use case helps imagers without requiring them to change the way they work. What’s not to like?

AI for Bone Density Screening with X-Ray

Screening women for osteoporosis using AI analysis of chest X-rays acquired for other clinical indications meets U.S. thresholds for cost-effectiveness. That’s according to a new study in JACR that highlights the potential of radiography AI for opportunistic screening.

Osteoporosis screening is already performed using DEXA scanners that detect bone density loss in women.

  • But DEXA scanners aren’t always available, and dedicated screening for just one condition can be expensive. 

Using AI to analyze chest X-rays that women might be getting for other conditions could expand the pool of women being screened for osteoporosis without incurring significant additional costs.

  • Indeed, Japanese researchers recently published a study homing in on the best techniques for AI-enhanced osteoporosis screening with radiography.

In the new study, researchers performed a modeling analysis that simulated the cost-effectiveness of an osteoporosis screening program based on AI-enhanced chest radiographs for U.S. women aged 50 and up. 

  • The cost analysis compared osteoporosis screening plus treatment versus treatment alone, incorporating standard fracture treatment and imaging costs ($66 for DEXA scans, $20 for chest X-rays).

In a sample of 1k women, AI-enhanced X-ray osteoporosis screening…

  • Had an ICER of $72.1k per QALY, below the U.S. cost-effectiveness thresholds of $100k to $150k per QALY.
  • Would produce healthcare savings of $99k, offset by treatment costs of $208k.
  • Would prevent 2.8 fractures and increase QALYs by 1.5.
  • Would remain cost-effective as long as AI’s cost did not exceed $62 per patient.
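
As a rough sanity check on the headline figure (our back-of-the-envelope arithmetic, not a calculation reported in the paper), the ICER is simply the incremental cost divided by the incremental QALYs gained:

```latex
\[
\text{ICER} \;=\; \frac{\Delta \text{Cost}}{\Delta \text{QALYs}}
\;\approx\; \frac{\$208\text{k} - \$99\text{k}}{1.5}
\;\approx\; \$72.7\text{k per QALY}
\]
```

That lands close to the reported $72.1k per QALY; the small gap reflects rounding in the published inputs.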

Adjusting the model’s parameters produced even better performance for AI-based screening. 

  • If medication adherence improved by 50%, the ICER was reduced to $28.6k.

The Takeaway

The new research offers more support for opportunistic osteoporosis screening, this time perhaps from the most important angle of all: cost-effectiveness. If confirmed with other studies, AI-based bone density analysis could make routine chest X-rays even more valuable.

RP Builds AI Mosaic as Company’s IT Foundation

Radiology Partners announced a new initiative to guide the rollout of AI across its nationwide network of radiology practices. The company’s new MosaicOS will be the IT foundation that connects RP practices and supports clinical uses ranging from AI-assisted reporting and automated report drafting to image management.

Radiology Partners has grown since its founding in 2012 to become the largest privately held provider of imaging services in the U.S. and a major force behind the consolidation of private-practice radiology groups.

  • RP has always maintained a heavy technology investment, and has been looking closely at the rise of AI in radiology.

That’s because the growth in imaging volume is so massive that clinicians will no longer be able to care for patients adequately without AI’s assistance, at least according to RP’s Associate Chief Medical Officer for Clinical AI Nina Kottler, MD.

RP laid the groundwork for MosaicOS in 2020 by first migrating its technology stack to a cloud-native infrastructure. 

  • This frees RP from reliance on on-premises legacy software and enables the company to push out updates that can be adopted quickly across its network.

RP’s Mosaic rollout includes several components, as the company…

  • Forms a new division, Mosaic Clinical Technologies, to oversee its AI activities.
  • Debuts MosaicOS, a cloud-native operating system that combines AI support with workflow and other IT tools.
  • Launches Mosaic Reporting, an automated structured reporting solution that combines ambient voice AI with large language model technology.
  • Develops Mosaic Drafting, a multimodal AI foundation model that pre-drafts X-ray reports that radiologists can review, edit, and sign. 

Mosaic Reporting is already in use at some RP sites, and the company is pursuing FDA clearance for broader use of Mosaic Drafting. More Mosaic applications are on the way.

  • Mosaic tools will be disseminated to RP centers using the cloud-native infrastructure, and MosaicOS will include image management functions that providers can choose to use in place of or alongside existing tools like viewers and archives. 

Kottler told The Imaging Wire that RP has de-emphasized individual pixel-based AI models in favor of foundation models that have broader application.

  • What’s more, RP CEO Rich Whitney said the company has chosen to develop AI technology internally rather than rely on outside vendors, as this gives it greater control over its own AI adoption.

The Takeaway

The launch of MosaicOS marks an exciting milestone not only for Radiology Partners but also for radiology in general that could address nagging concerns about clinical AI adoption on a broad scale. RP has not only the network but also the technology resources to make the rollout a success – the question is whether outside AI developers will share in the rewards.

AI-Driven Lung Cancer Screening and Improving Patient Outcomes

AI is reshaping clinical decision-making, optimizing resource allocation, and enhancing both patient outcomes and experience in CT lung cancer screening. Radiology providers are successfully integrating new AI software tools into hospital operations, supporting diagnostic accuracy along the way.

At the center of this trend is Coreline Soft’s FDA-cleared AVIEW LCS Plus, a 3-in-1 solution capable of detecting lung nodules, quantifying emphysema, and analyzing coronary artery calcification – all from a single low-dose CT scan. 

  • AVIEW LCS Plus is in use at Temple Health, a nationally recognized institution in the U.S. Northeast, where it has allowed providers to streamline clinical workflows from detection to follow-up, delivering measurable improvements in care and ROI.

Coreline Soft will co-host a strategic webinar with the Temple Lung Center on August 1 at 1:30 PM ET, focused on AI-powered lung cancer screening and the evolving paradigm of early detection for chest diseases.

The webinar will offer firsthand insight into how Temple Health is drawing attention as a model for integrating AI beyond diagnosis – transforming it into a scalable, patient-centered care strategy.

The discussion will focus on two main areas…

  • Real-world outcomes: How AI improved diagnostic efficiency, early detection, and identification of comorbidities.
  • A deep dive into the precision technology of the AVIEW LCS Plus platform.

AI like Coreline’s is not replacing clinical judgment but reinforcing it, enhancing radiologists’ ability to detect, triage, and treat lung disease earlier and more efficiently, according to the Temple Lung Center’s Dr. Criner.

  • The webinar is open to pulmonologists, radiologists, cardiologists, respiratory-adjacent professionals, hospital stakeholders and administrators, and primary care providers across the U.S. and Canada. Interested participants can register for free in advance via the official registration link. 

The Takeaway

AI solutions like Coreline Soft’s AVIEW LCS Plus platform are having a real-world impact on healthcare providers as they roll out CT lung cancer screening programs. Sign up to learn more on August 1.

AI and Legal Liability in Radiology

What impact will artificial intelligence have on the legal liability of the radiologists who use it? A new study in NEJM AI suggests that medical malpractice juries may pass harsher judgment on radiologists when they make mistakes that disagree with AI findings.

AI is viewed as a technology that can save radiologists time while also helping them make more accurate diagnoses.

  • But there’s a dark side to AI as well – what happens when AI findings aren’t correct, or when radiologists disagree with AI only to discover it was right all along?

In the new study, a research team led by Michael Bernstein, PhD, of Brown University queried 1.3k U.S. adults on their attitudes toward radiologists’ legal liability in two clinical use cases for AI – identifying brain bleeds and detecting lung cancers.

  • Participants were asked if they felt radiologists met their duty of care to patients across different scenarios, such as whether the AI and the radiologist agreed or disagreed on the original diagnosis. 

Responses were compared to a “no AI” control scenario in which respondents assessed legal liability if radiologists hadn’t used AI at all, with researchers finding …

  • If radiologists missed a finding that AI flagged (i.e., they disagreed with AI), more respondents found the radiologist liable …
    • Brain bleeds: 73% found the radiologist liable (vs. 50% with no AI).
    • Lung cancer: 79% found the radiologist liable (vs. 64% with no AI).
  • If both the radiologist and AI missed the diagnosis, there was no statistically significant difference …
    • Brain bleeds: 50% vs. 56% with no AI (p=0.33).
    • Lung cancer: 64% vs. 65% with no AI (p=0.77).
  • Respondents were less likely to side with plaintiffs when given information about standard AI error rates …
    • When AI disagreed with the radiologist’s diagnosis:
      • Brain bleeds: plaintiff agreement fell from 73% to 49%.
      • Lung cancer: fell from 79% to 73%.
    • When AI agreed with the radiologist’s diagnosis:
      • Brain bleeds: plaintiff agreement fell from 50% to 34%.
      • Lung cancer: fell from 64% to 56%.

The Takeaway

The new study offers a fascinating look at AI’s future in radiology from a medico-legal perspective. But there’s one question the researchers didn’t address: If AI-supported image interpretation eventually becomes the standard of care, will radiologists be found liable for not using it at all? Stay tuned. 

All-Star AI for Prostate MRI

An AI model for prostate MRI that combines the best features of five separate algorithms helped radiologists diagnose clinically significant prostate cancer in a new study in JAMA Network Open.

The Prostate Imaging-Cancer AI (PI-CAI) consortium was formed to address a nagging problem in prostate cancer screening.

  • Studies have shown that MRI can reduce biopsies and minimize workup of clinically insignificant disease, but it also has high inter-reader variability and requires a high level of expertise. 

The PI-CAI challenge brought together researchers from multiple countries with a single goal: develop an AI algorithm for prostate MRI that would improve radiologists’ performance.

  • Results were presented at the RSNA and ECR conferences, as well as in a 2024 paper in Lancet Oncology showing that, individually, the algorithms improved radiologist performance and generated fewer false positives.

But what if you combined the best of the PI-CAI algorithms into a single all-star AI model? 

  • Researchers did just that in the new study, combining the top five algorithms from the PI-CAI challenge into a single AI model in which each algorithm’s results were pooled to create an average detection map indicating the presence of prostate cancer. 
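
A minimal sketch of the pooling idea appears below; the array shapes and random inputs are invented stand-ins for real model outputs, so this is illustrative rather than the consortium’s actual implementation.

```python
# Illustrative ensemble averaging of per-model detection maps (not PI-CAI's actual code)
import numpy as np

def ensemble_detection_map(maps: list[np.ndarray]) -> np.ndarray:
    """Average voxel-wise cancer-likelihood maps produced by several models."""
    stacked = np.stack(maps, axis=0)   # shape: (n_models, depth, height, width)
    return stacked.mean(axis=0)        # averaged map, shape: (depth, height, width)

# Hypothetical example: five models, each outputting a 24x128x128 likelihood map
rng = np.random.default_rng(0)
model_maps = [rng.random((24, 128, 128)) for _ in range(5)]
avg_map = ensemble_detection_map(model_maps)
print(avg_map.shape, round(float(avg_map.max()), 3))
```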

To test the new algorithm, 61 readers from 17 countries interpreted 360 prostate MRI scans with and without the model. 

  • Patients in the test cohort had a median age of 65 years and a median PSA level of 7.0 ng/mL; 34% were eventually diagnosed with clinically significant prostate cancer.

Results of PI-CAI-aided prostate MRI were as follows …

  • Radiologists using the algorithm had higher diagnostic performance than those who didn’t (AUROC=0.92 vs. 0.88).
  • PI-CAI working on its own had the highest performance (AUROC=0.95).
  • Sensitivity improved for cases rated as PI-RADS 3 or higher (97% vs. 94%).
  • Specificity also improved (50% vs. 48%).
  • AI assistance improved the performance of non-expert readers more than expert readers, with greater increases in sensitivity (3.7% vs. 1.5%) and specificity (4.3% vs. 2.8%).

The Takeaway

The new PI-CAI study is an important advance not only for prostate cancer diagnosis but also for the broader AI industry. It points to a future where multiple AI algorithms could be combined to tackle clinical challenges with better diagnostic performance than any model working alone.

Mammo Risk Prediction Improves with AI

Artificial intelligence is beginning to show that it can not only detect breast cancer on mammograms but also predict a patient’s future risk of developing the disease. A new study in JAMA Network Open showed that a U.S. university’s homegrown AI algorithm worked well in predicting breast cancer risk across diverse ethnic groups.

Breast cancer screening traditionally has used a one-size-fits-all model based on age for determining who gets mammography.

  • But screening might be better tailored to a woman’s risk, which can be calculated from various clinical factors like breast density and family history.

At the same time, research into mammography AI has uncovered an interesting phenomenon – AI algorithms can predict whether a woman will develop breast cancer later in life even if her current mammograms are normal. 

The new study involves a risk prediction algorithm developed at Washington University School of Medicine in St. Louis that uses AI to analyze subtle differences and changes in mammograms over time, including texture, calcification, and breast asymmetry.

  • The algorithm then generates a mammogram risk score that can indicate the risk of developing a new tumor.
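
To illustrate the general idea of longitudinal risk modeling, the toy sketch below combines current-exam imaging features with their change from a prior screening round and feeds both into a classifier; the data are random and the feature set is hypothetical, so this is not Washington University’s algorithm.

```python
# Toy sketch of longitudinal risk scoring (random data; not the WashU model)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_women, n_feats = 1000, 8  # hypothetical texture/calcification/asymmetry features

current = rng.normal(size=(n_women, n_feats))  # features from the latest mammogram
prior = rng.normal(size=(n_women, n_feats))    # features from a prior screening round
delta = current - prior                        # change in features over time
X = np.hstack([current, delta])
y = rng.integers(0, 2, n_women)                # placeholder outcome labels

risk_model = LogisticRegression(max_iter=1000).fit(X, y)
risk_scores = risk_model.predict_proba(X)[:, 1]  # per-woman risk score
print("Example risk scores:", np.round(risk_scores[:5], 3))
```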

In clinical trials in British Columbia, the algorithm was used to analyze full-field digital mammograms of 206.9k women aged 40-74, with up to four years of prior mammograms available. Results were as follows …

  • The algorithm had an AUROC of 0.78 for predicting cancer over the next five years.
  • Performance was higher for women older than 50 than for those aged 40-50 (AUROC of 0.80 vs. 0.76).
  • Performance was consistent across women of different races.
  • 9% of women had a five-year risk higher than 3%. 

The algorithm’s inclusion of multiple mammography screening rounds is a major advantage over algorithms that use a single mammogram, as it can capture changes in the breast over time.

  • The model also showed consistent performance across ethnic groups, avoiding a shortcoming of other risk prediction algorithms trained mostly on data from White women.

The Takeaway

The new study advances the field of breast cancer risk prediction with a powerful new approach that supports the concept of more tailored screening. This could make mammography even more effective than the one-size-fits-all approach used for decades.

SIIM 2025 Video Highlights

The annual meeting of the Society for Imaging Informatics in Medicine convened in Portland, Oregon, with members of radiology’s imaging IT community joining together to discuss the latest trends in enterprise imaging, AI, and more. 

As with other recent radiology meetings, AI dominated the discussion at SIIM 2025. But AI’s potential to revolutionize radiology has been tempered by nagging concerns about slow clinical adoption and questionable return on investment for healthcare providers.

Regulatory turbulence is also a concern, highlighted by recent changes implemented by the Trump Administration at the FDA. Some industry observers have speculated that AI approvals have slowed down, while others point out that the FDA – which has lagged other countries in approving new AI algorithms – might benefit from a fresh approach to regulating AI.

The Takeaway 

In the end, SIIM 2025 can be chalked up as another success for the organization. While attendance seemed to be down slightly (most likely due to the West Coast location and pre-Memorial Day timing), the society pointed out that the number of vendor exhibitors at SIIM 2025 exceeded 100 for the first time in years – a sure sign of a healthy imaging IT industry. 

Check out our SIIM 2025 videos below or visit the Shows page on our website, as well as our YouTube and LinkedIn pages, and keep an eye out for our next Imaging Wire newsletter on Thursday.

AI Boosts DBT in Detecting More Breast Cancer

A real-world study of AI for DBT screening found that AI-assisted mammogram interpretation nearly doubled the breast cancer detection rate. Radiologists using iCAD’s ProFound AI software saw sharp improvements across multiple metrics. 

Mammography screening has quickly become one of the most promising use cases for AI. 

  • Multiple large-scale studies published in 2024 and 2025 have documented improved radiologist performance when using AI for mammogram interpretation, with the largest studies performed in Europe.

Another new technology changing mammography screening is digital breast tomosynthesis, which is being rapidly adopted in the U.S. 

  • DBT use in Europe is occurring more slowly, so questions have arisen about whether AI’s benefits for 2D mammography would also be found with 3D systems.

To investigate this question, researchers writing in Clinical Breast Cancer tested radiologist performance for DBT screening before and after implementation of iCAD’s ProFound V2.1 AI algorithm in 2020 at Indiana University. 

  • Interestingly, the pre-AI period included use of iCAD’s older PowerLook CAD software. 

Across the 16.7k DBT cases studied, exams interpreted with AI saw …

  • A sharp improvement in cancer detection rate per 1k exams (6.1 vs. 3.7).
  • A decline in the abnormal interpretation rate (6.5% vs. 8.2%).
  • Higher PPV1, the rate at which abnormal interpretations yield a cancer diagnosis (8.8% vs. 4.2%).
  • Higher PPV3, the rate at which biopsies performed are positive for cancer (57% vs. 32%).
  • Higher specificity (94% vs. 92%).
  • No statistically significant change in sensitivity.
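
For readers less familiar with these screening metrics, the quick illustration below uses made-up counts (not the study’s data) to show how cancer detection rate, abnormal interpretation rate, PPV1, and PPV3 are typically computed.

```python
# Screening-metric arithmetic with made-up counts (not the study's data)
n_exams = 10_000   # screening exams read
n_recalled = 800   # abnormal interpretations (recalls)
n_biopsied = 150   # recalled cases that went on to biopsy
n_cancers = 60     # screen-detected cancers

cdr_per_1k = 1000 * n_cancers / n_exams  # cancer detection rate per 1k exams
abnormal_rate = n_recalled / n_exams     # abnormal interpretation rate
ppv1 = n_cancers / n_recalled            # PPV1: cancers per abnormal interpretation
ppv3 = n_cancers / n_biopsied            # PPV3: cancers per biopsy performed

print(f"CDR: {cdr_per_1k:.1f}/1k  AIR: {abnormal_rate:.1%}  PPV1: {ppv1:.1%}  PPV3: {ppv3:.1%}")
```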

The findings on sensitivity are curious given AI’s positive impact on other interpretation metrics.

  • Researchers postulated that breast cancer incidence was higher in the post-AI period, possibly because AI caught cancers that had been missed during the period without AI.

The Takeaway

The radiology world has seen multiple positive studies on AI for mammography, but most of these have come from Europe and involved 2D mammography, not DBT. The new results suggest that AI’s benefits will also transfer to DBT, the technology that’s becoming the standard of care for breast screening in the U.S.

How Do Patients Feel about Mammo AI?

As radiology moves (albeit slowly) to adopt clinical AI, how do patients feel about having their images interpreted by a computer? Researchers in a new study in JACR queried patients about their attitudes regarding mammography AI, finding that for the most part the jury is still out. 

Researchers got responses to a 36-question survey from 3.5k patients presenting for breast imaging at eight U.S. practices from 2023-2024, finding …

  • The most common response to four questions on general perceptions of medical AI was “neutral,” with a range of 43-51%. 
  • When asked if using AI for medical tasks was a bad idea, more patients disagreed than agreed (28% vs. 25%). 
  • Regarding confidence that medical AI was safe, patients were more dubious, with higher levels of disagreement (27% vs. 20%).
  • When asked if medical AI was helpful, 43% were neutral, but positive responses outnumbered negative ones (35% vs. 19%).

The Takeaway

Much like clinicians, patients seem to be taking a wait-and-see attitude toward mammography AI. The new survey does reveal fault lines – like privacy and equity – that AI developers would do well to address as they work to win broader acceptance for their technology.

