AI in Radiology: Old Problems, New Tech

Radiology has seen this movie before. Big promises (efficiency, accuracy, burnout relief). Big anxieties (ROI, workflow chaos, pressure to “keep up”). The question isn’t whether AI is powerful. It’s whether we’ve learned how to deploy new technology without repeating the pain of PACS migrations and the EHR era.

The Myth of the Perfect Rollout. Health technology assessment (HTA) sounds great in theory – rigorous, comprehensive, evidence-first. In practice, few organizations have the time, talent, or budget to execute it at scale. 

  • Remember EHRs: adoption happened because policy and money forced it, not because the playbook was tidy. Healthcare’s default pattern is to adopt, then evolve – messy, market-driven, and iterative. Waiting for perfect plans is how you get left behind.

Are AI’s Problems Really New?

  • Black box déjà vu. Radiology has long trusted complex, opaque systems (reconstruction algorithms, vendor-specific pipelines). What mattered – and still matters – is validated performance and dependable outputs, not full internal transparency.
  • Model drift ≈ old friends. We’ve always recalibrated clinical tools as populations and scanners change. Monitoring and revalidation are known problems, not alien ones.

What’s Different This Time? Unlike the top-down EHR mandate, AI is largely market-driven. That gives providers agency. 

  • AI solutions must save time, improve outcomes, or avoid costs – not just publish a ROC curve. They must show operational value inside the native radiology workflow.

Fortunately, there are ways to adopt AI and then evolve your processes to make it work…

  • Workflow or bust. Demand in-viewer evidence objects, one-click report insertion, and EHR write-back. If AI adds steps, it subtracts value.
  • Start narrow, scale deliberately. Pick high-volume, high-friction tasks. Prove value in weeks, not years. Expand only when the operational signal is undeniable.
  • Measure what matters. Track operational metrics like seconds saved and coverage (e.g. eligible cases processed before dictation), reliability (e.g. results present before finalization, fail-open behavior), and user friction like context-switching rate and time-to-evidence.
  • Monitor. Stand up organization and site-level performance checks. Treat AI like equipment – scheduled, observed, and maintained.
  • Invest in long-term value. Favor standards, vendor-agnostic interoperability, clear telemetry, and transparent pricing.
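In practice, the metrics above fall out of simple per-case telemetry. A minimal sketch in Python — the log fields (`eligible`, `ai_before_dictation`, and so on) are illustrative assumptions, not any vendor's schema:

```python
# Minimal sketch of operational AI metrics from per-case telemetry.
# Field names are illustrative assumptions, not any vendor's schema.

cases = [
    # ai_before_dictation: AI result arrived before the radiologist began dictating
    # ai_before_final: AI result was present before report finalization
    {"eligible": True,  "ai_before_dictation": True,  "ai_before_final": True,  "seconds_saved": 45},
    {"eligible": True,  "ai_before_dictation": False, "ai_before_final": True,  "seconds_saved": 0},
    {"eligible": True,  "ai_before_dictation": True,  "ai_before_final": True,  "seconds_saved": 30},
    {"eligible": False, "ai_before_dictation": False, "ai_before_final": False, "seconds_saved": 0},
]

eligible = [c for c in cases if c["eligible"]]

# Coverage: eligible cases processed before dictation
coverage = sum(c["ai_before_dictation"] for c in eligible) / len(eligible)

# Reliability: results present before finalization
reliability = sum(c["ai_before_final"] for c in eligible) / len(eligible)

# Seconds saved: mean per eligible case
mean_saved = sum(c["seconds_saved"] for c in eligible) / len(eligible)

print(f"coverage={coverage:.0%} reliability={reliability:.0%} mean_saved={mean_saved:.0f}s")
```

The point of tracking at this level is that every number is auditable back to individual cases — the same property you'd demand of any piece of monitored equipment.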

The Takeaway

AI’s success in radiology won’t be defined by elegance of algorithms but by pragmatism of deployment. This will be an evolution – hands-on, incremental, sometimes messy. The difference now is that radiology can drive. Make the technology serve the service line – not the other way around.

Target the toughest workflows. Adapt and evolve with Densitas Breast Imaging AI Suite.

AI First Drafts: A New Dawn for Radiology Reporting

For radiologists – the medical detectives who find clues in our medical images – the daily grind can feel like a “death by a thousand cuts.” Much of their time is spent not on diagnosis, but on tedious reporting. 

Now, a new generation of artificial intelligence is stepping in to serve as a high-tech scribe, automating the drudgery.

  • This AI tackles reporting, the most time-consuming part of radiologists’ workflow.

AI-enabled radiology reporting makes transcribing data from technologist worksheets a thing of the past, using Optical Character Recognition (OCR) to decipher everything, even what looks like “chicken scratch handwriting.” Then…

  • A large language model (LLM) applies clinical context to ensure it understands the meaning.
  • It intelligently injects that data into the correct sections of the radiologist’s personal report template.
  • Finally, it performs its own “inference,” like calculating a TI-RADS score and dropping it right into the impression.
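The flow described above — OCR, LLM normalization, template injection, then derived scoring — can be sketched as a pipeline. Every function below is a hypothetical placeholder with stubbed outputs; it illustrates the shape of the data flow, not any vendor's actual API, and the TI-RADS rule is a deliberately toy simplification:

```python
# Hedged sketch of an AI report-drafting pipeline. All functions are
# hypothetical placeholders with stubbed outputs, not any vendor's API.

def ocr_worksheet(image) -> dict:
    """Step 1: OCR the technologist worksheet into raw key/value text."""
    return {"nodule size": "1.2 cm", "location": "right lobe"}  # stubbed

def apply_clinical_context(raw: dict) -> dict:
    """Step 2: an LLM normalizes raw text into structured clinical fields."""
    return {"nodule_size_cm": 1.2, "location": "right thyroid lobe"}  # stubbed

def fill_template(fields: dict, template: dict) -> dict:
    """Step 3: inject structured data into the radiologist's own template."""
    report = dict(template)
    report["findings"] = f"{fields['nodule_size_cm']} cm nodule, {fields['location']}"
    return report

def infer_tirads(fields: dict) -> str:
    """Step 4: derive a score and drop it into the impression.
    Toy rule for illustration only -- real TI-RADS weighs five feature categories."""
    return "TR3" if fields["nodule_size_cm"] < 1.5 else "TR4"

raw = ocr_worksheet(image=None)
fields = apply_clinical_context(raw)
report = fill_template(fields, template={"impression": ""})
report["impression"] = f"TI-RADS {infer_tirads(fields)}"
print(report)
```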

Modern AI also learns from a radiologist’s actions, providing a hands-free way to build a report, with features such as…

Smart Measurements: When a lesion is measured, the AI recognizes the location and automatically adds the data and comparisons to prior scans into the report.

Automated Prior Population: Instead of struggling with speech-to-text, the AI notices when a prior study is opened for comparison and automatically populates that exam’s date.

Streamlined Expert Findings: A radiologist can simply state positive findings, and the AI acts as both writer and editor. 

AI-enabled radiology reporting weaves dictated phrases into complete sentences, generates an impression based on clinical guidelines like BI-RADS, and serves as a vigilant proofreader, flagging errors like laterality mistakes or semantic impossibilities. 

As AI technology matures, the software itself is becoming easier to build. The true differentiator is the team behind it. 

  • For radiologists evaluating these new reporting tools, it’s critical to look for teams that are “AI native” – built from the ground up with AI at their core. 

Companies founded on these principles, such as New Lantern, are pioneering these all-in-one radiology reporting solutions, treating the challenge not as a problem to be fixed with another widget, but as an opportunity to build one complete, intelligent platform. 

The Takeaway 

The evolution in AI-enabled radiology reporting isn’t about replacing radiologists; it’s a tool to augment their skills. Radiologists who harness AI to create reports faster will significantly outpace those who do not, allowing them to return their full focus to the art of diagnosis.

Does BMI Affect AI Accuracy?

High body mass index is known to create problems for various medical imaging modalities, from CT to ultrasound. Could it also affect the accuracy of artificial intelligence algorithms? Researchers asked this question as it pertains to lung nodule detection in a new study in European Journal of Radiology.

X-ray photons attenuate as they pass through body tissue, which can decrease image quality and produce more noise.

  • This is particularly a challenge for CT exams that don’t use a lot of radiation, like low-dose CT lung screening. 
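The underlying physics is the Beer-Lambert relation: transmitted intensity falls off exponentially with tissue thickness, and relative quantum noise scales roughly as one over the square root of the detected photon count — so a larger patient at a fixed low dose means a noisier image. A toy calculation, using illustrative round numbers rather than real scanner parameters:

```python
import math

# Beer-Lambert: I = I0 * exp(-mu * x). Fewer detected photons means more
# quantum noise, since relative noise scales roughly as 1/sqrt(photons).
mu = 0.02   # attenuation per mm -- an illustrative round number for soft tissue
I0 = 1e6    # photons entering the patient -- illustrative low-dose count

for thickness_mm in (200, 300, 400):  # slimmer vs. larger patient
    detected = I0 * math.exp(-mu * thickness_mm)
    rel_noise = 1 / math.sqrt(detected)
    print(f"{thickness_mm} mm: {detected:.0f} photons, relative noise {rel_noise:.4f}")
```

Doubling the path length here cuts detected photons by orders of magnitude, which is why high BMI is a particular worry at screening-level doses.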

At the same time, AI algorithms are being developed to make LDCT screening more efficient, such as by identifying and classifying lung nodules.

  • But if high BMI makes CT images noisier, will that affect AI’s performance? Researchers from the Netherlands tested the idea in 352 patients who got LDCT screening as part of the Lifelines study.

Researchers compared patients at both the high end of the BMI spectrum (mean 39.8) and low end (mean 18.7). 

  • Both Siemens Healthineers’ AI-Rad Companion Chest CT algorithm and a human radiologist performed lung nodule detection, and their results were compared. 

Across the study population, researchers found…

  • There was no statistically significant difference in AI’s sensitivity between high and low BMI groups (0.75 vs. 0.80, p = 0.37). 
  • Nor was there any difference in the human radiologist’s sensitivity (0.76 vs. 0.84, p = 0.17).
  • AI had fewer false positives per scan in the high BMI group than low BMI (0.30 vs. 0.55), a difference that was statistically significant (p = 0.05). 
  • The difference in false positives for the human radiologist was not statistically significant (0.05 vs. 0.16, p = 0.09).
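Comparisons like these typically rest on a two-proportion test. The sketch below implements one from scratch; the counts are invented for illustration, since the study's per-group denominators aren't reproduced here:

```python
import math

def two_proportion_z(pos1, n1, pos2, n2):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = pos1 / n1, pos2 / n2
    pooled = (pos1 + pos2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Illustrative counts only -- NOT the study's actual data.
z, p = two_proportion_z(pos1=30, n1=40, pos2=32, n2=40)  # 0.75 vs. 0.80 sensitivity
print(f"z={z:.2f}, p={p:.2f}")
```

One thing the sketch makes tangible: with small groups, sizeable-looking sensitivity gaps can still fall well short of significance — which is exactly the caveat the takeaway below raises.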

The study authors attributed the AI’s slightly lower sensitivity in the high BMI group to increased noise in those scans.

  • They recommended that AI developers include people with both high and low BMI in datasets used for training algorithms.

The Takeaway

The results offer some comfort that patient BMI probably doesn’t have a huge effect on AI performance for nodule detection in lung screening, but they hint at a possible effect that might have reached statistical significance with a larger sample size. More study in the area is definitely needed given the rising importance of AI for CT lung cancer screening. 

Could States Take Over AI Regulation from the FDA?

Could states take over AI regulation from the FDA as a possible solution to the growing workforce shortage in radiology? It may seem like a wild idea at first, but it’s a question posed in a special edition of Academic Radiology focusing on radiology and the law. 

Healthcare’s workforce shortage is no secret, and in radiology it’s manifested itself with tight supplies of both radiologists and radiologic technologists. 

  • AI has been touted as a potential solution to lighten the workload, such as by triaging images most likely to be normal away from immediate radiologist review. 

And autonomous AI – algorithms that operate without human oversight – is already nibbling at radiology’s fringes, with at least one company claiming its solution can produce full radiology reports without human intervention.

  • But the FDA is notoriously conservative when it comes to authorizing new technologies, and AI is no exception. So what’s to stop a state facing a severe radiologist shortage from adopting autonomous AI on its own to help out? 

The new article reviews the legal landscape across both constitutional and state law, finding examples in which some states have successfully defied federal regulation – such as by legalizing marijuana use – when the issue has broad public support. 

But the authors eventually answer their own question in the negative, stating that it’s not likely states will usurp the FDA’s role regulating AI because…

  • The U.S. Constitution’s Supremacy and Commerce clauses ensure federal law will always supersede state law.
  • If AI made an error, malpractice regulation would be murky given a lack of legal precedent at the state level. 
  • Teleradiologists could opt out of providing care to a state if AI regulations were too burdensome – which could exacerbate the workforce crisis. 

The Takeaway

Ultimately, it’s not likely states will take over AI regulation from the FDA, even if the healthcare workforce shortage worsens significantly. But the Academic Radiology article is an interesting thought experiment that – in an environment in which U.S. healthcare policies have already been turned upside down – may not be so unthinkable after all. 

AI Spots Lung Nodules

A new study in Radiology on an AI algorithm for analyzing lung nodules on CT lung cancer screening exams shows that radiologists may be able to have their cake and eat it too: better identification of malignant nodules with lower false-positive rates. 

The rising utilization of low-dose CT screening is great news for clinicians (and eligible patients), but managing suspicious nodules remains a major challenge, as false-positive findings expose patients to unnecessary biopsies and costs.

  • False-positive rates have come down somewhat from the high rates seen in the big lung cancer screening clinical trials like NLST and NELSON, but there is still room for improvement.

Dutch researchers applied AI to the problem, developing a deep learning algorithm trained on 16.1k NLST nodules that produces a score from 0% to 100% based on a nodule’s likelihood of malignancy. 

  • They then tested the algorithm with baseline screening rounds of 4.1k patients from three datasets drawn from different lung cancer screening trials: NELSON, DLCST in Denmark, and MILD in Italy.

The algorithm’s performance was compared to the Pan-Canadian Early Detection of Lung Cancer model, a widely used clinical guideline that uses patient characteristics like age and family history and nodule characteristics like size and location to estimate risk.

Compared to PanCan, the deep learning algorithm…

  • Reduced false-positive findings sharply by classifying more benign cases as low risk (68% vs. 47%) when set at 100% sensitivity for cancers diagnosed within one year.
  • For all nodules, achieved comparable AUCs at one year (0.98 vs. 0.98), two years (0.96 vs. 0.94), and throughout screening (0.94 vs. 0.93).
  • For indeterminate nodules 5-15 mm, significantly outperformed PanCan at one year (0.95 vs. 0.91), two years (0.94 vs. 0.88), and throughout screening (0.91 vs. 0.86).
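An AUC like the ones above has a concrete interpretation: the probability that a randomly chosen malignant nodule gets a higher score than a randomly chosen benign one. It can be computed directly from that definition, no ML library required (the scores below are invented for illustration):

```python
def auc(scores_pos, scores_neg):
    """AUC = P(random positive outscores random negative), ties count half.
    Equivalent to the normalized Mann-Whitney U statistic."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Invented malignancy scores (0-100%) for illustration only.
malignant = [92, 85, 70, 64]
benign = [40, 55, 12, 64, 30]

print(f"AUC = {auc(malignant, benign):.3f}")
```

Framed this way, the indeterminate-nodule result says the deep learning score ranks small malignant nodules above benign ones more reliably than PanCan does.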

The model’s performance for indeterminate nodules is particularly intriguing, as these are challenging to manage due to their small size and can lead to unnecessary follow-up procedures.

The Takeaway

Using AI to differentiate malignant from benign nodules promises to make CT lung cancer screening more accurate and easier to perform than manual nodule classification methods – and should add to the exam’s growing momentum.

Bayer Steps Back from Blackford

Pharmaceutical giant Bayer said it plans to deprioritize its investment in AI platform company Blackford Analysis as part of a general move away from the platform business. Bayer is also winding down its investment in Calantic Digital Solutions, the digital platform company it formed in 2022. 

The move is a stunning turnaround for Blackford, which was founded in 2010 and was the first and perhaps most prominent of the digital AI platform companies. 

  • Bayer acquired Blackford in 2023, and operated it in parallel with Calantic, which also offered AI solutions in the platform format. 

Platform AI companies have a simple value proposition: rather than buy AI algorithms from multiple individual developers, hospitals and imaging facilities contract with a single platform company and pick and choose the solutions they need.

  • It’s a great idea, but platform providers face the same challenges as algorithm developers due to slower-than-expected AI clinical adoption. 

Bayer’s move was confirmed by company representatives, who noted that personnel will be maintained to support the Blackford AI platform and fulfill existing contractual commitments. 

  • “Bayer has made the decision to deprioritize its digital platform business, which includes Blackford, and will discontinue offerings and services. Resources will be reinvested into growth areas that support healthcare institutions around the world, in alignment with customer needs,” a company representative said. 

And in a letter to customers obtained by The Imaging Wire, Blackford confirmed Bayer’s decision, stating that Blackford’s core team will remain in place led by COO James Holroyd during the transition. 

  • The company also said it would “discuss and facilitate opportunities to move existing Blackford contracts into direct deals with AI vendors, or alternate platform providers.”

Bayer’s withdrawal from the digital platform space includes the Calantic business, which Bayer formed three years ago to offer internally developed AI tools.

  • At the time, industry experts postulated that contrast agent companies had an inside track for radiology AI thanks to their contracts to supply consumables to customers – a theory that in retrospect hasn’t panned out.

Speculation about Blackford’s fate burst into the public eye late last week with a detailed LinkedIn post by healthcare recruiter Jay Gurney, who explained that while Blackford has been successful – and is sitting on a “monster pipeline” of hospital deals – it’s simply not a great fit for a pharmaceutical company. 

  • Despite Bayer’s withdrawal, Blackford could make a good acquisition candidate for a company without a strong AI portfolio that wants to quickly boost its position. 

The Takeaway

Bayer’s announcement that it’s winding down its Blackford and Calantic investments is sure to send shockwaves through the radiology AI industry, which is already struggling with slow clinical adoption and declining venture capital investment. The question is whether a white knight will ride to Blackford’s rescue.

Why Radiology Leaders Are Turning to AI – And Why They’re Not Looking Back

From single-scanner clinics to university hospitals, radiology leaders around the globe face the same challenge: keeping up with rising patient demand while managing costs.

MRI volumes are climbing. Scanner hours and budgets? Not so much.

  • Under pressure to do more with less, decision-makers are reaching a conclusion that was unthinkable just a few years ago: AI-powered MRI is no longer a novelty – it’s a necessity.

No matter the size or scale of the operation, diagnostic imaging providers face a familiar set of challenges:

  • High capital costs – New scanners cost seven figures, and upgrades run hundreds of thousands.
  • Limited capacity – Most sites can’t easily add scanners, staff, or hours to meet demand.
  • Rising demand – MRI volume continues to grow as chronic conditions rise and preventive care gains traction.
  • Patient expectations – Long, uncomfortable exams frustrate patients who may look elsewhere.

AI offers a path forward, helping imaging teams handle more studies without compromising diagnostic standards.

AIRS Medical built SwiftMR, AI-powered MRI reconstruction software, to meet today’s imaging challenges. Hospitals and clinics in over 35 countries use SwiftMR to:

  • Reduce scan times by up to 50% compared to standard protocols.
  • Deliver sharper images radiologists can trust.
  • Enhance the patient experience with shorter exams and fewer motion-related rescans.

SwiftMR is vendor-neutral, compatible with all MRI makes, models, and field strengths.

FDA-cleared, MDR-certified, and clinically validated, SwiftMR is trusted by over 300 imaging providers in the U.S. and over 1,000 globally.

Those deployments show that AI-powered MRI delivers tangible operational, clinical, and financial benefits across site types and geographies. 


The Takeaway

Radiology leaders are relying on SwiftMR to transform how they deliver care. From enterprise networks to single-scanner clinics, imaging teams are unlocking new levels of efficiency and patient care.

Lunit Acquires Prognosia Breast Cancer Risk AI

AI developer Lunit is ramping up its position in breast cancer risk prediction by acquiring Prognosia, the developer of a risk prediction algorithm spun out from Washington University School of Medicine in St. Louis. The move will complement Lunit and Volpara’s existing AI models for 2D and 3D mammography analysis. 

Risk prediction has been touted as a better way to determine which women will develop breast cancer in coming years, and high-risk women can be managed more aggressively with more frequent screening intervals or the use of additional imaging modalities.

  • Risk prediction traditionally has relied on models like Tyrer-Cuzick, which is based on clinical factors like patient age, weight, breast density, and family history.

But AI advancements have been leveraged in recent years to develop algorithms that could be more accurate than traditional models.

  • One of these is Prognosia, founded in 2024 based on work conducted by Graham Colditz, MD, DrPH, and Shu (Joy) Jiang, PhD, at Washington University.

Their Prognosia Breast algorithm analyzes subtle differences and changes in 2D and 3D mammograms over time, such as texture, calcification, and breast asymmetry, to generate a score that predicts the risk of developing a new tumor.

Prognosia then took the algorithm to the FDA, and its regulatory submission received Breakthrough Device Designation.

  • In conversations with The Imaging Wire, Colditz and Jiang said they believe AI-based estimates like those of Prognosia Breast will eventually replace the one-size-fits-all model of breast screening, with low-risk women screened less often and high-risk women getting more attention.

Colditz and Jiang are working with the FDA on marketing authorization, and once authorized Prognosia’s algorithm will enter a segment that’s drawing increased attention from AI developers.

  • The two will continue to work with Lunit as it moves Prognosia Breast into the commercialization phase and integrates the product with Lunit’s own offerings, like the RiskPathways application in its Lunit Breast Suite and the technologies it gained through its acquisition of Volpara in 2024.

The Takeaway

Lunit’s acquisition of Prognosia portends exciting times ahead for breast cancer risk prediction. Armed with tools like Prognosia Breast, clinicians will soon be able to offer mammography screening protocols that are far more tailored to women’s risk profiles than what’s been available in the past. 

Ensemble Mammo AI Combines Competing Algorithms

If one AI algorithm works great for breast cancer screening, would two be even better? That’s the question addressed by a new study that combined two commercially available AI algorithms and applied them in different configurations to help radiologists interpret mammograms.

Mammography AI is emerging as one of the primary use cases for medical AI, understandable given that breast imaging specialists have to sort through thousands of normal cases to find one cancer. 

Most mammography AI studies to date have applied a single algorithm, but multiple algorithms are commercially available – so why not see how they work together? 

  • This kind of ensemble approach has already been tried with AI for prostate MRI scans – for example in the PI-CAI challenge – but South Korean researchers writing in European Radiology believed it would be a novel approach for mammography.

So they combined two commercially available algorithms – Lunit’s Insight MMG and ScreenPoint Medical’s Transpara – and used them to analyze 3k screening and diagnostic mammograms.

  • Not only did the authors combine competing algorithms, but they adjusted the ensemble’s output to emphasize five different screening parameters, such as sensitivity and specificity, or by having the algorithms assess cases in different sequences.

The authors assessed ensemble AI’s accuracy and its ability to reduce workload by triaging cases that didn’t need radiologist review, finding that the ensemble…

  • Outperformed single-algorithm AI’s sensitivity in Sensitive Mode (84% vs. 81%-82%) with an 18% radiologist workload reduction.
  • Outperformed single-algorithm AI’s specificity in Specific Mode (88% vs. 84%-85%) with a 42% workload reduction.
  • Had 82% sensitivity in Conservative Mode but only reduced workload by 9.8%.
  • Saw little difference in sensitivity based on which algorithm read mammograms first (80.3% and 80.8%), but both approaches reduced workload 50%.
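Mechanically, an ensemble with selectable operating modes amounts to combining two per-case scores and applying a mode-specific threshold. The scheme below is one plausible arrangement with invented thresholds and invented cases — the study's actual combination rules aren't reproduced here:

```python
# One plausible ensemble-triage scheme -- invented, not the study's actual rules.
# Each case carries suspicion scores (0-1) from two algorithms, "a" and "b".

MODES = {
    # (combine, threshold): flag the case for radiologist review when the
    # combined score exceeds the threshold; otherwise triage it out.
    "sensitive":    (max,                      0.10),  # miss as little as possible
    "specific":     (min,                      0.50),  # flag only when both agree
    "conservative": (lambda a, b: (a + b) / 2, 0.05),  # triage very few cases out
}

def triage(cases, mode):
    combine, threshold = MODES[mode]
    needs_review = [c for c in cases if combine(c["a"], c["b"]) > threshold]
    workload_reduction = 1 - len(needs_review) / len(cases)
    return needs_review, workload_reduction

cases = [{"a": 0.9, "b": 0.7}, {"a": 0.3, "b": 0.05}, {"a": 0.02, "b": 0.04}]
for mode in MODES:
    flagged, reduction = triage(cases, mode)
    print(f"{mode}: {len(flagged)} flagged, workload cut {reduction:.0%}")
```

The design point matches the study's conclusion: the combination rule and threshold are tunable knobs, so a practice can pick the operating mode that matches where it needs help.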

The authors suggested that if applied in routine clinical use, ensemble AI could be tailored based on each breast imaging practice’s preferences and where they felt they needed the most help.

The Takeaway

The new results offer an intriguing application of the ensemble AI strategy to mammography screening. Given the plethora of breast AI algorithms available and the rise of platform AI companies that put dozens of solutions at clinicians’ fingertips, it’s not hard to see this approach being put into clinical practice soon.

Unpacking Heartflow IPO’s Lessons for AI Firms

Cardiac AI specialist Heartflow went public last week, and the IPO was a watershed moment for the imaging AI segment. The question is whether Heartflow is blazing a path to be followed by other AI developers or if the company is a shooting star that’s more likely to be admired from afar than emulated.

First the details: Heartflow went public August 8, raising $317M by issuing 16.7M shares at $19 each – and finishing up 50% for the day. 

  • The IPO beat analyst expectations, which originally estimated gross proceeds of $215M, and put the company’s market capitalization at $2.5B – well within the mid-cap stock category. 

So what’s so special about this IPO? Heartflow’s flagship product is FFRCT Analysis, which uses AI-based software to calculate fractional flow reserve – a measure of heart health – from coronary CT angiography scans. 

  • This eliminates the need for an invasive pressure-wire catheter to be threaded into the heart.
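Fractional flow reserve itself is just a pressure ratio — distal coronary pressure over aortic pressure — with values at or below roughly 0.80 conventionally read as flow-limiting. Heartflow's contribution is computing that number from CTA instead of a wire; the arithmetic is trivial (the pressures below are illustrative):

```python
def ffr(p_distal_mmhg, p_aortic_mmhg):
    """Fractional flow reserve: distal coronary pressure / aortic pressure.
    Values at or below ~0.80 are conventionally read as flow-limiting."""
    return p_distal_mmhg / p_aortic_mmhg

# Illustrative pressures, not patient data.
print(ffr(64, 92))   # well below 0.80 -> suggests a significant stenosis
print(ffr(95, 100))  # near 1.0 -> suggests unobstructed flow
```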

Heartflow got an early start in the FFR-CT segment by nabbing FDA clearance for Heartflow FFRCT Analysis in 2014, and since then has been the single most successful AI company in winning reimbursement from both CMS and private payors.

  • In fact, a 2023 analysis of AI reimbursement found that FFRCT Analysis was the top AI product by number of submitted CPT claims, at 67.3k claims – over 4X more than the next product on the list.

That’s created a revenue stream for Heartflow that clearly bucks the myth that clinicians aren’t getting paid for AI.

  • And in an IPO filing with the SEC, Heartflow revealed how reimbursement is driving revenue growth, which was up 44% in 2024 over 2023 ($125.8M vs. $87.2M). 
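The growth figure is simple arithmetic on the filing's two revenue numbers:

```python
# Year-over-year revenue growth from the figures in Heartflow's SEC filing.
rev_2024, rev_2023 = 125.8, 87.2  # $M
growth = (rev_2024 - rev_2023) / rev_2023
print(f"{growth:.0%}")
```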

But it’s not all sunshine and rainbows at the Mountain View, California company, which posted significant net losses for both 2024 and 2023 ($96.4M and $95.7M).

  • As a public company, Heartflow may be on a shorter leash to reach profitability than it would have been had it remained privately held.

But the bigger picture is what Heartflow’s IPO means for the imaging AI segment as a whole. 

  • It’s easily the biggest IPO by a pure-play imaging IT vendor in years, and dispels the conventional wisdom that investors are shying away from the sector.

The Takeaway

Heartflow’s IPO shows that in spite of clinical AI’s shortcomings (slow adoption, sluggish reimbursement, etc.), it’s still generating significant investor interest. The company’s focus on achieving both clinical and financial milestones (i.e. reimbursement) should be an example for other AI developers.
