CT Lung Screening News from WCLC 2025

The World Conference on Lung Cancer wrapped up this week in Barcelona, and CT lung cancer screening was a highlighted topic, as it was at WCLC 2024 in San Diego.

The last year has seen significant global progress toward new population-based lung screening programs, and sessions at WCLC 2025 highlighted the advances being made… 

  • A screening program serving Kentucky and Indiana since 2013 has seen a 30-percentage-point decline in late-stage lung cancer diagnoses – over 3.5X faster than national trends – with far higher uptake than national averages (52% vs. 16%).
  • In the European 4-IN-THE-LUNG-RUN trial, AI had a negative predictive value similar to radiologists (98% vs. 97%) in analyzing 2.2k CT lung screen exams, indicating its potential as a first reader.
  • Another 4-IN-THE-LUNG-RUN study of 2.6k individuals revealed that AI had a 2.5% incidental findings rate, with none having acute consequences after a year.
  • The USPSTF’s 2021 guideline expansion may have reduced the number of at-risk individuals eligible for screening. A California analysis of 11.7k lung cancer patients found 8.8% fewer patients were eligible.
  • Researchers from Illinois found that basing screening eligibility on a 20-year smoking history rather than USPSTF 2021’s 20-pack-year threshold would capture more eligible individuals (70% vs. 65%), especially racial minorities.
  • A screening program at a VA healthcare system in Northern California achieved a 94% adherence rate for 3.9k military veterans, with 67% of cancers diagnosed at early stages.
  • U.S. military veterans had much higher screening rates (50% vs. 29%) in an analysis of 413.6k cancer survivors. Among women, 71% were up to date on mammography screening but only 25% were current for lung screens. 
  • Researchers used Qure.ai’s algorithm to detect malignant pulmonary nodules on 198k routine chest X-rays in a tuberculosis screening program.
  • Asian American women are at higher risk of lung cancer – even if they don’t smoke – and a session explored whether they should be screened.
  • A Stanford University program using electronic alerts to primary care physicians boosted screening compliance after one year (16% vs. 8.9%).
  • Attending lung screening didn’t make people feel they had a “license to smoke” in a U.K. study of 87.8k people.
  • Italian researchers tested Coreline Soft’s AVIEW AI solution as a first reader for screening.

The Takeaway

Findings from this week’s WCLC 2025 conference show both the challenges and opportunities in CT lung cancer screening. Researchers around the world are demonstrating that with hard work, dedication, and persistence, lung screening can become an effective, life-saving exam.

Bayer Steps Back from Blackford

Pharmaceutical giant Bayer said it plans to deprioritize its investment in AI platform company Blackford Analysis as part of a general move away from the platform business. Bayer is also winding down its investment in Calantic Digital Solutions, the digital platform company it formed in 2022. 

The move is a stunning turnaround for Blackford, which was founded in 2010 and was the first and perhaps most prominent of the digital AI platform companies. 

  • Bayer acquired Blackford in 2023, and operated it in parallel with Calantic, which also offered AI solutions in the platform format. 

Platform AI companies have a simple value proposition: rather than buy AI algorithms from multiple individual developers, hospitals and imaging facilities contract with a single platform company and pick and choose the solutions they need.

  • It’s a great idea, but platform providers face the same challenges as algorithm developers due to slower-than-expected AI clinical adoption. 

Bayer’s move was confirmed by a company representative, who noted that personnel will be maintained to support the Blackford AI platform and fulfill existing contractual commitments. 

  • “Bayer has made the decision to deprioritize its digital platform business, which includes Blackford, and will discontinue offerings and services. Resources will be reinvested into growth areas that support healthcare institutions around the world, in alignment with customer needs,” the representative said. 

And in a letter to customers obtained by The Imaging Wire, Blackford confirmed Bayer’s decision, stating that Blackford’s core team will remain in place during the transition, led by COO James Holroyd. 

  • The company also said it would “discuss and facilitate opportunities to move existing Blackford contracts into direct deals with AI vendors, or alternate platform providers.”

Bayer’s withdrawal from the digital platform space includes the Calantic business, which Bayer formed three years ago to offer internally developed AI tools.

  • At the time, industry experts postulated that contrast agent companies had an inside track for radiology AI thanks to their contracts to supply consumables to customers – a theory that in retrospect hasn’t panned out.

Speculation about Blackford’s fate burst into the public eye late last week with a detailed LinkedIn post by healthcare recruiter Jay Gurney, who explained that while Blackford has been successful – and is sitting on a “monster pipeline” of hospital deals – it’s simply not a great fit for a pharmaceutical company. 

  • Despite Bayer’s withdrawal, Blackford could make a good acquisition candidate for a company without a strong AI portfolio that wants to quickly boost its position. 

The Takeaway

Bayer’s announcement that it’s winding down its Blackford and Calantic investments is sure to send shockwaves through the radiology AI industry, which is already struggling with slow clinical adoption and declining venture capital investment. The question is whether a white knight will ride to Blackford’s rescue.

Why Radiology Leaders Are Turning to AI – And Why They’re Not Looking Back

From single-scanner clinics to university hospitals, radiology leaders around the globe face the same challenge: keeping up with rising patient demand while managing costs.

MRI volumes are climbing. Scanner hours and budgets? Not so much.

  • Under pressure to do more with less, decision-makers are reaching a conclusion that was unthinkable just a few years ago: AI-powered MRI is no longer a novelty – it’s a necessity.

No matter the size or scale of the operation, diagnostic imaging providers face a familiar set of challenges:

  • High capital costs – New scanners cost seven figures, and upgrades run hundreds of thousands.
  • Limited capacity – Most sites can’t easily add scanners, staff, or hours to meet demand.
  • Rising demand – MRI volume continues to grow as chronic conditions rise and preventive care gains traction.
  • Patient expectations – Long, uncomfortable exams frustrate patients who may look elsewhere.

AI offers a path forward, helping imaging teams handle more studies without compromising diagnostic standards.

AIRS Medical built SwiftMR, AI-powered MRI reconstruction software, to meet today’s imaging challenges. Hospitals and clinics in over 35 countries use SwiftMR to:

  • Reduce scan times by up to 50% compared to standard protocols.
  • Deliver sharper images radiologists can trust.
  • Enhance the patient experience with shorter exams and fewer motion-related rescans.

SwiftMR is vendor-neutral, compatible with all MRI makes, models, and field strengths.

FDA-cleared, MDR-certified, and clinically validated, SwiftMR is trusted by over 300 imaging providers in the U.S. and over 1,000 globally.

Customer outcomes show that AI-powered MRI delivers tangible operational, clinical, and financial benefits across site types and geographies. 

Watch this video to learn more about SwiftMR.

The Takeaway

Radiology leaders are relying on SwiftMR to transform how they deliver care. From enterprise networks to single-scanner clinics, imaging teams are unlocking new levels of efficiency and patient care.

Lunit Acquires Prognosia Breast Cancer Risk AI

AI developer Lunit is ramping up its position in breast cancer risk prediction by acquiring Prognosia, the developer of a risk prediction algorithm spun out from Washington University School of Medicine in St. Louis. The move will complement Lunit and Volpara’s existing AI models for 2D and 3D mammography analysis. 

Risk prediction has been touted as a better way to determine which women will develop breast cancer in coming years, and high-risk women can be managed more aggressively with more frequent screening intervals or the use of additional imaging modalities.

  • Risk prediction traditionally has relied on models like Tyrer-Cuzick, which is based on clinical factors like patient age, weight, breast density, and family history.

But AI advancements have been leveraged in recent years to develop algorithms that could be more accurate than traditional models.

  • One of these is Prognosia, founded in 2024 based on work conducted by Graham Colditz, MD, DrPH, and Shu (Joy) Jiang, PhD, at Washington University.

Their Prognosia Breast algorithm analyzes subtle differences and changes in 2D and 3D mammograms over time, such as texture, calcification, and breast asymmetry, to generate a score that predicts the risk of developing a new tumor.
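
Prognosia hasn’t published the model’s internals, but the general idea – scoring change over time rather than a single exam – can be illustrated with a rough sketch. Everything below (feature names, weights, the logistic link) is a hypothetical stand-in, not the actual algorithm:

```python
# Illustrative sketch only -- Prognosia Breast's actual model is proprietary.
# The per-exam features (texture, calcification, asymmetry) and weights below
# are made up; the point is that the score depends on *change over time*.
import numpy as np

def longitudinal_risk_score(exams: list) -> float:
    """Map a time-ordered series of mammogram feature dicts to a 0-1 risk score."""
    feats = np.array([[e["texture"], e["calcification"], e["asymmetry"]]
                      for e in exams])
    latest = feats[-1]                  # current exam's feature levels
    trend = feats[-1] - feats[0]        # how much the features have shifted
    w_level = np.array([0.8, 0.5, 0.6])     # hypothetical weights
    w_trend = np.array([1.2, 0.9, 1.0])
    logit = -3.0 + latest @ w_level + trend @ w_trend
    return float(1.0 / (1.0 + np.exp(-logit)))   # logistic link -> risk-like score

prior = {"texture": 0.4, "calcification": 0.1, "asymmetry": 0.2}
current = {"texture": 0.7, "calcification": 0.3, "asymmetry": 0.5}
print(f"risk score: {longitudinal_risk_score([prior, current]):.2f}")
```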

Prognosia built on that momentum with an FDA submission, and the application received Breakthrough Device Designation.

  • In conversations with The Imaging Wire, Colditz and Jiang said they believe AI-based estimates like those of Prognosia Breast will eventually replace the one-size-fits-all model of breast screening, with low-risk women screened less often and high-risk women getting more attention.

Colditz and Jiang are working with the FDA on marketing authorization, and once authorized Prognosia’s algorithm will enter a segment that’s drawing increased attention from AI developers.

  • The two will continue to work with Lunit as it moves Prognosia Breast into commercialization and integrates the product with Lunit’s own offerings, such as the RiskPathways application in its Lunit Breast Suite and the technologies it gained through its 2024 acquisition of Volpara.

The Takeaway

Lunit’s acquisition of Prognosia portends exciting times ahead for breast cancer risk prediction. Armed with tools like Prognosia Breast, clinicians will soon be able to offer mammography screening protocols that are far more tailored to women’s risk profiles than what’s been available in the past. 

Cardiac CT’s Long-Term PROMISE

Coronary CT angiography works just as well as traditional stress testing over the long haul for patients with stable symptoms of coronary artery disease. That’s according to the latest follow-up data from the PROMISE study in JAMA Cardiology, which found no difference in mortality between the two strategies. 

PROMISE was a randomized controlled trial that compared patient work-up with anatomical CCTA scans to functional stress testing (exercise ECG, stress echo, or stress nuclear) in 10k patients from 2010 to 2014. 

  • The first PROMISE results found that in patients with CAD symptoms who were followed up for just over two years, there was little difference between anatomical CCTA and functional stress testing for endpoints like death, myocardial infarction, or other complications.

But what about over a longer follow-up period? The new results extend PROMISE’s follow-up to a median of 10.6 years, finding… 

  • Mortality rates were largely the same whether patients got CCTA or stress testing (14.3% vs. 14.5%, p = 0.56). 
  • Cardiovascular mortality rates were also similar (4.0% vs. 4.3%, p = 0.77).
  • As were noncardiovascular death rates (10.7% for both).

There were some differences in the predictive power of each modality based on patient characteristics…

  • With CCTA, any abnormal finding increased a patient’s mortality risk compared to normal findings for severe, moderate, and mild disease (HR = 3.44, 3.38, and 1.99, respectively).
  • With stress testing, only severely abnormal results were associated with higher mortality risk (HR = 1.45).

The new PROMISE data also tracks well with recent 10-year findings from SCOT-HEART, another major study that demonstrated CCTA’s value.

  • Combining results from PROMISE and SCOT-HEART shows 89% survival of patients with stable angina at 12 years, demonstrating good effectiveness regardless of workup strategy.

The Takeaway

PROMISE findings have gone a long way toward showing that CCTA is every bit as effective as stress testing, and the new results reinforce this message. The findings are also good news for radiology, which has a stronger hold over anatomical imaging with CT than it does over the predominant stress modalities, which are largely controlled by cardiology.

Ensemble Mammo AI Combines Competing Algorithms

If one AI algorithm works great for breast cancer screening, would two be even better? That’s the question addressed by a new study that combined two commercially available AI algorithms and applied them in different configurations to help radiologists interpret mammograms.

Mammography AI is emerging as one of the primary use cases for medical AI, understandable given that breast imaging specialists must sort through hundreds of normal exams to find a single cancer. 

Most mammography AI studies to date have applied a single algorithm, but multiple algorithms are commercially available, so why not see how they work together? 

  • This kind of ensemble approach has already been tried with AI for prostate MRI scans – for example in the PI-CAI challenge – but South Korean researchers writing in European Radiology believed it would be a novel approach for mammography.

So they combined two commercially available algorithms – Lunit’s Insight MMG and ScreenPoint Medical’s Transpara – and used them to analyze 3k screening and diagnostic mammograms.

  • Not only did the authors combine competing algorithms, they also adjusted the ensemble across five configurations – emphasizing parameters such as sensitivity or specificity, or having the algorithms assess cases in different sequences (a rough sketch of the approach follows below).
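
The paper’s exact fusion rules aren’t reproduced here, but the basic mechanics are easy to sketch: each vendor’s algorithm returns a suspicion score, and the ensemble’s operating point decides how the two scores are combined. The thresholds and mode names below are hypothetical placeholders, not the study’s actual settings:

```python
# Minimal sketch of ensembling two mammography AI outputs. The score scale,
# thresholds, and mode logic are hypothetical -- not the values used by
# Insight MMG, Transpara, or the European Radiology study.

def ensemble_flag(score_a: float, score_b: float, mode: str = "sensitive") -> bool:
    """Return True if the case should be routed to a radiologist for review."""
    if mode == "sensitive":
        # Trust either algorithm's suspicion: favors sensitivity.
        return max(score_a, score_b) >= 0.20
    if mode == "specific":
        # Require both algorithms to agree at a higher bar: favors specificity.
        return min(score_a, score_b) >= 0.50
    if mode == "sequential":
        # Algorithm A reads first; B only weighs in on A's borderline cases.
        return score_a >= 0.50 or (score_a >= 0.10 and score_b >= 0.40)
    raise ValueError(f"unknown mode: {mode}")

# Cases left unflagged could be triaged off the radiologist worklist.
for a, b in [(0.05, 0.12), (0.35, 0.18), (0.65, 0.72)]:
    print(a, b, ensemble_flag(a, b, mode="specific"))
```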

The authors assessed ensemble AI’s accuracy and ability to reduce workload by triaging cases that didn’t need radiologist review, finding that the ensemble…

  • Outperformed single-algorithm AI’s sensitivity in Sensitive Mode (84% vs. 81%-82%) with an 18% radiologist workload reduction.
  • Outperformed single-algorithm AI’s specificity in Specific Mode (88% vs. 84%-85%) with a 42% workload reduction.
  • Had 82% sensitivity in Conservative Mode but only reduced workload by 9.8%.
  • Saw little difference in sensitivity based on which algorithm read mammograms first (80.3% vs. 80.8%), but both sequential approaches reduced workload by 50%.

The authors suggested that if applied in routine clinical use, ensemble AI could be tailored based on each breast imaging practice’s preferences and where they felt they needed the most help.

The Takeaway

The new results offer an intriguing application of the ensemble AI strategy to mammography screening. Given the plethora of breast AI algorithms available and the rise of platform AI companies that put dozens of solutions at clinicians’ fingertips, it’s not hard to see this approach being put into clinical practice soon.

AI for Brain MRI

What if you could speed up brain MRI exams by performing fast scans for most patients, and reserving complex sequences for the patients who need them? A hint of that future comes from a new study in which AI showed progress in helping radiologists interpret scans with fewer sequences.

MRI can visualize minute structures in the body, especially in the brain, but it’s one of the trickiest imaging modalities to operate.

  • There’s an alphabet soup of MRI pulse sequences, and the modality’s complexity is multiplied when contrast has to be used. 

Breast MRI experts have been experimenting with abbreviated scanning protocols that speed up image acquisition and interpretation by using fewer and less complex sequences.

  • Researchers applied that concept to MRI brain imaging in a new European Journal of Radiology paper in which they tested Cerebriu’s Apollo AI algorithm with 414 patients from four hospitals in Denmark.

Apollo processes three brain MRI sequences (DWI, SWI or T2* GRE, and T2-FLAIR) and can detect critical findings like brain infarcts, intracranial hemorrhages, and tumors while the patient is still on the table.

  • If an abnormality is detected, Apollo prompts technologists to acquire a fourth sequence, such as T1-weighted imaging (the control flow is sketched below).
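
Cerebriu hasn’t published a public API for this workflow, so the sketch below only illustrates the control flow described above; every function name and return value is a hypothetical stub standing in for the scanner and the AI inference step:

```python
# Sketch of the conditional-acquisition workflow described above. All names
# and return values are hypothetical stubs, not Cerebriu's actual software.

CORE_SEQUENCES = ["DWI", "SWI", "T2-FLAIR"]
CRITICAL_FINDINGS = {"infarct", "hemorrhage", "tumor"}

def run_sequence(patient_id: str, sequence: str) -> str:
    """Stub for acquiring one MRI sequence; returns a fake image handle."""
    return f"{patient_id}/{sequence}.nii"

def ai_detect_findings(images: dict) -> set:
    """Stub for on-scanner AI inference; a real model would return detected findings."""
    return {"infarct"}   # pretend the AI flagged an infarct

def abbreviated_brain_mri(patient_id: str) -> dict:
    # 1. Acquire the three core sequences.
    images = {seq: run_sequence(patient_id, seq) for seq in CORE_SEQUENCES}
    # 2. Run AI while the patient is still on the table.
    findings = ai_detect_findings(images)
    # 3. If anything critical is flagged, prompt the technologist to add T1.
    if findings & CRITICAL_FINDINGS:
        print(f"Alert: {sorted(findings)} detected -- adding T1 for {patient_id}")
        images["T1"] = run_sequence(patient_id, "T1")
    return {"images": images, "ai_findings": sorted(findings)}

print(abbreviated_brain_mri("patient-001"))
```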

That sounds great, but how well does Apollo work in the real world? 

  • Researchers compared the algorithm’s performance to that of expert neuroradiologists in multiple workflows, such as reading three- and four-sequence MRI scans with and without AI assistance. 

Compared to neuroradiologists using the four-sequence MRI protocol without AI assistance, they found…

  • Apollo’s sensitivity was better than neuroradiologists for brain infarcts (94% vs. 89%) and intracranial tumors (74% vs. 71%) but slightly lower for intracranial hemorrhages (82% vs. 83%).
  • AI’s specificity was somewhat lower, however, for brain infarcts (86% vs. 99%), intracranial hemorrhages (84% vs. 99%), and intracranial tumors (62% vs. 97%). 
  • When neuroradiologists had AI findings in addition to the four-sequence protocol, tumor detection sensitivity improved slightly, but specificity fell. 

While Apollo’s sensitivity was a benefit, the researchers said its low specificity “presents a challenge” and could result in unnecessary additional sequences or contrast administration. 

  • Specificity could be affected by age-related changes in older patients, as well as differences in MRI scanner models used.

The Takeaway

The new findings show that AI-assisted abbreviated brain MRI still needs refinement. But it’s still early days for Cerebriu and Apollo (which has the CE Mark but not FDA clearance), so watch this space for more updates. 

MRI of Bullet Fragments Is Possible

Radiology has a renewed focus on MRI safety following the tragic death of a New York man in an MRI accident last month. With that in mind, a new JACR study looks at adverse MRI events caused by an uncommon but still important phenomenon: retained bullet fragments in patients getting scans. 

MRI is radiology’s most powerful modality, but its strong magnetic fields can be hazardous – and on extremely rare occasions even fatal – for both patients and medical personnel.

  • Patients are supposed to be screened for metallic implants, jewelry, and other contraindications, but how often do providers know to ask about retained bullet fragments?

Having a retained bullet fragment on its own isn’t a contraindication for MRI, but providers do need to know where fragments are located and how large they are.

  • If pre-scan screening discovers a patient with a retained fragment, they typically receive X-rays of the involved area to determine location and size – scans should be aborted if the fragment is in a solid organ or within 5 mm of an important artery or vein (a plain-code version of that rule is sketched below).
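
As a plain-code illustration of that go/no-go rule: the 5 mm vessel-distance threshold comes straight from the sentence above, while the organ list is an abbreviated hypothetical example rather than a complete clinical checklist:

```python
# Illustration of the abort rule described above. The 5 mm threshold is from
# the text; the solid-organ list is a shortened, hypothetical example and not
# a substitute for an institution's MRI safety screening protocol.

SOLID_ORGANS = {"brain", "heart", "liver", "spleen", "kidney"}

def proceed_with_mri(fragment_location: str, distance_to_vessel_mm: float) -> bool:
    """Return False (abort) if the fragment is in a solid organ or within
    5 mm of an important artery or vein."""
    if fragment_location.lower() in SOLID_ORGANS:
        return False
    return distance_to_vessel_mm >= 5.0

print(proceed_with_mri("thigh soft tissue", distance_to_vessel_mm=12.0))  # True
print(proceed_with_mri("liver", distance_to_vessel_mm=30.0))              # False
```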

If all these steps are taken and the scan goes ahead, how often do adverse MRI events occur? 

  • Researchers reviewed 6.1k X-ray reports that contained the terms “bullet” or “shrapnel” over 13 years, finding 284 patients who got an MRI scan after a retained fragment was found on radiography.

They found…

  • Only four patients (1.8%) experienced symptoms during MRI scans.
  • Each of the exams was terminated early due to patient discomfort, with three patients reporting burning and one general discomfort.
  • None of the symptomatic exams had the bullet in the MRI field of view.
  • No serious injuries occurred, and no follow-up care was required. 

The Takeaway

The new findings are encouraging, showing that with careful patient screening and monitoring, MRI scans can be performed on patients with retained bullet fragments. But as always, MRI operators must remain vigilant and adhere to published MRI safety guidelines.

Unpacking Heartflow IPO’s Lessons for AI Firms

Cardiac AI specialist Heartflow went public last week, and the IPO was a watershed moment for the imaging AI segment. The question is whether Heartflow is blazing a path to be followed by other AI developers or if the company is a shooting star that’s more likely to be admired from afar than emulated.

First the details: Heartflow went public August 8, raising $317M by issuing 16.7M shares at $19 each – and finishing up 50% for the day. 

  • The IPO beat analyst expectations, which originally estimated gross proceeds of $215M, and put the company’s market capitalization at $2.5B – well within the mid-cap stock category. 

So what’s so special about this IPO? Heartflow’s flagship product is FFRCT Analysis, which uses AI-based software to calculate fractional flow reserve – a measure of how severely a coronary narrowing restricts blood flow – from coronary CT angiography scans. 

  • This eliminates the need for an invasive pressure-wire catheter to be threaded into the heart (the underlying measure is illustrated below).
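
For context on what’s being calculated: fractional flow reserve is conventionally the ratio of blood pressure distal to a narrowing to aortic pressure under maximal flow, with values at or below roughly 0.80 generally treated as hemodynamically significant – Heartflow’s software estimates these pressures computationally from the CT scan rather than measuring them with a catheter. The numbers below are illustrative only:

```python
# FFR = Pd / Pa: pressure distal to the stenosis divided by aortic pressure,
# measured (or, for FFRCT, simulated) at maximal blood flow. Values <= ~0.80
# are the conventional cutoff for a hemodynamically significant lesion.
# Example pressures below are illustrative, not patient data.

def fractional_flow_reserve(p_distal_mmhg: float, p_aortic_mmhg: float) -> float:
    return p_distal_mmhg / p_aortic_mmhg

ffr = fractional_flow_reserve(p_distal_mmhg=68, p_aortic_mmhg=92)
print(f"FFR = {ffr:.2f} -> {'significant' if ffr <= 0.80 else 'not significant'}")
```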

Heartflow got an early start in the FFR-CT segment by nabbing FDA clearance for Heartflow FFRCT Analysis in 2014, and since then has been the single most successful AI company in winning reimbursement from both CMS and private payors.

  • In fact, a 2023 analysis of AI reimbursement found that FFRCT Analysis was the top AI product by number of submitted CPT claims, at 67.3k claims – over 4X more than the next product on the list.

That’s created a revenue stream for Heartflow that clearly bucks the myth that clinicians aren’t getting paid for AI.

  • And in an IPO filing with the SEC, Heartflow revealed how reimbursement is driving revenue growth, which was up 44% in 2024 over 2023 ($125.8M vs. $87.2M, respectively). 

But it’s not all sunshine and rainbows at the Mountain View, California company, which posted significant net losses for both 2024 and 2023 ($96.4M and $95.7M).

  • As a public company, Heartflow may have a shorter leash for getting to profitability than it would have had it remained privately held.

But the bigger picture is what Heartflow’s IPO means for the imaging AI segment as a whole. 

  • It’s easily the biggest IPO by a pure-play imaging IT vendor in years, and dispels the conventional wisdom that investors are shying away from the sector.

The Takeaway

Heartflow’s IPO shows that in spite of clinical AI’s shortcomings (slow adoption, sluggish reimbursement, etc.), it’s still generating significant investor interest. The company’s focus on achieving both clinical and financial milestones (i.e. reimbursement) should be an example for other AI developers.

AI Predicts Radiology Workload

AI is touted as a tool that can help radiologists lighten their workload. But what if you could use AI to predict when you’ll need help the most? Researchers in Academic Radiology tried that with an AI algorithm that predicted radiology workload based on three key factors. 

Imaging practices are facing pressure from a variety of forces that include rising imaging volume and workforce shortages, with one recent study documenting a sharp workload increase over the past 10 years.

  • Many industry observers believe AI can assist radiologists by speeding diagnoses or by removing studies most likely to be normal from the worklist. 

But researchers and vendors are also developing AI algorithms for operational use – arguably where radiology practices need the most help.

  • AI can predict equipment utilization, or even create a virtual twin of a radiology facility where administrators can adjust various factors like staffing to visualize their impact on operations.

In the new study, researchers from Mass General Brigham developed six machine learning algorithms based on a year of imaging exam volumes from two academic medical centers.

The group entered 707 features into the models, but ultimately settled on three main operational factors that best predicted the next weekday’s imaging workload, in particular for outpatient exams (a rough sketch of such a model follows the list)…

  • The current number of unread exams.
  • The number of exams scheduled to be performed after 5 p.m.
  • The number of exams scheduled to be performed the next day.
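
The study compared six models, and the winning one isn’t specified in this summary; as a rough sketch of the idea, a regression over those three 5 p.m. features could be fit on historical days and queried each evening. The data below is synthetic and the model choice is an assumption, purely for illustration:

```python
# Rough sketch of forecasting next-day exam volume from the three features
# above. Synthetic data; the model choice (gradient boosting) is an assumption,
# not necessarily the study's best-performing algorithm.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_days = 250   # roughly one year of weekdays

# Columns: unread exams now, exams scheduled after 5 p.m., exams scheduled tomorrow
X = rng.integers(low=(50, 20, 300), high=(400, 150, 900), size=(n_days, 3))
# Synthetic "true" relationship plus noise, for demonstration only.
y = 0.6 * X[:, 0] + 0.8 * X[:, 1] + 0.9 * X[:, 2] + rng.normal(0, 25, n_days)

model = GradientBoostingRegressor().fit(X[:-20], y[:-20])   # train on earlier days
print("held-out R^2:", round(model.score(X[-20:], y[-20:]), 2))

tonight = np.array([[180, 60, 650]])   # today's 5 p.m. snapshot
print("predicted exams tomorrow:", int(model.predict(tonight)[0]))
```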

The algorithm’s predictions were put into clinical use with a Tableau dashboard that pulled data from 5 p.m. to 7 a.m. the following day, computed workload predictions, and output its forecast in an online interface they called “BusyBot.”

  • But if you’re only analyzing three factors, do you really need AI to predict the next day’s workload? 

The authors answered this question by comparing the best-performing AI model to estimates made by radiologists from just looking at EHR data. 

  • The radiologists tended to either underestimate or overestimate the next day’s volume relative to actual numbers, leading the authors to conclude that AI did a better job of capturing the dynamics and weighting the variables to produce accurate estimates.

The Takeaway

Using AI to predict the next day’s radiology workload is an intriguing twist on the argument that AI can help make radiologists more efficient. Better yet, this use case helps imagers without requiring them to change the way they work. What’s not to like?
