Agentic AI for Radiology Follow-Up

Agentic AI has quickly become one of the hottest topics in radiology. But what is it really good for? Texas researchers offer one possible use case in a new study in NEJM Catalyst: scouring radiology reports to identify patients who require follow-up. 

Agentic AI is a new flavor of artificial intelligence that’s capable of working autonomously to complete tasks with minimal human supervision.

  • In healthcare, it’s being applied to a wide range of tasks, from improving health system operations to handling clinical and administrative work.

In the current study, researchers from Parkland Health in Dallas assigned agentic AI to one of the trickiest tasks in radiology: making sure patients with suspicious findings comply with recommendations for follow-up procedures.  

  • Previous studies have documented low rates of adherence to radiologist recommendations for follow-up imaging (possibly as low as 50%), creating the uncomfortable possibility of missed opportunities that could have major patient-care ramifications.

The dilemma can be compounded with the use of structured note templates in EHRs, as improper use or modification of these macros can lead to missed notifications. 

  • To address the problem, Parkland clinicians developed an AI agent based on a pretrained open-source large language model (Meta’s LLM Llama 3 70B) that reviews clinical impressions, extracts important details for follow-up, and integrates its findings into departmental workflow to enable patient outreach.

In tests on 10k radiologist notes, Parkland researchers found that their AI agent…

  • Had an overall detection rate of ~5.1%, lower than rates reported in other published studies (8% to 12%).
  • Had far higher sensitivity than Parkland’s previous macro-based follow-up notification system (99% vs. 16%), correctly flagging 6X more cases (513 vs. 83).
  • Achieved higher accuracy (99% vs. 58%) and 94% accuracy for characterizing follow-up timing, recommended procedure, and underlying abnormality. 
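For context, sensitivity and accuracy figures like those above follow from standard confusion-matrix definitions; here is a minimal sketch using hypothetical counts (not the study’s raw data):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true follow-up cases that get flagged."""
    return tp / (tp + fn)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Fraction of all cases classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts on a 10k-note sample, for illustration only.
tp, fn, fp, tn = 513, 5, 30, 9_452
print(f"sensitivity: {sensitivity(tp, fn):.1%}")  # 99.0%
print(f"accuracy: {accuracy(tp, tn, fp, fn):.1%}")
```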

Considering Parkland’s annual volume of 500k imaging studies, the AI agent could identify 21.5k follow-up cases a year. 

  • Many of these could be serious issues, such as new cancer diagnoses or pathologies that require surgical intervention. 

The Takeaway

The new study shows that agentic AI isn’t some technogeek’s far-off dream – it’s a useful tool on the verge of real-world implementation, with the potential to improve patient care without overburdening radiology staff.

Doctors Adopt ‘Shadow AI’ for Efficiency Gains

Doctors under pressure to work more efficiently are looking for help from “shadow AI” – artificial intelligence applications adopted outside a formal hospital approval process. A new survey of U.S. healthcare personnel found that many administrators have encountered unauthorized AI tools in their organizations, including some used for direct patient care. 

U.S. healthcare providers are struggling under rising patient volumes in the midst of an ongoing workforce shortage, a situation that’s leading to burnout among clinicians. 

  • AI is often touted as a possible solution by enabling providers to do more with less, but the jury is still out on whether this works in the real world. 

The new survey was conducted by Wolters Kluwer Health to assess usage of what the report described as “shadow AI,” or AI that’s adopted without proper hospital authorization processes. 

  • Shadow AI introduces risk to data, security, and privacy, and providers should better understand the need for an enterprise approach to AI with appropriate controls.

It’s worth noting that the report’s use of the term “authorization” applies primarily to an institution’s internal approval and governance processes for AI rather than formal FDA regulatory authorization. 

  • AI algorithms that aren’t used for direct patient care don’t require FDA authorization, as the agency pointed out in a guidance just a few weeks ago. 

Researchers surveyed 518 health professionals, finding…

  • 41% were aware of colleagues using unauthorized AI tools.
  • 17% said they had personally used an unauthorized tool.
  • 10% said they had used an unauthorized AI tool for direct patient care.

While the report’s recommendation for stronger AI governance is valid, there could be a competitive subtext to the findings. Wolters Kluwer offers healthcare clinical decision support solutions, and the company is currently locked in a fierce battle with OpenEvidence for dominance in the CDS space.

  • OpenEvidence’s CDS solution is wildly popular with clinicians, many of whom install and consult the software on their own, outside enterprise-level governance – exactly the kind of “unauthorized” model the new report criticizes.

The Takeaway

The Wolters Kluwer report could be shedding light on a concerning new trend, or it could represent an effort by an established player to shut out a competitive threat. Either way, its warning on the need for appropriate enterprise-level AI governance should not be ignored.

Canon Celebrates 50 Years of CT Innovation: Redefining Healthcare with Meaningful AI

This year marks a historic milestone for Canon – five decades of pioneering CT innovation that has transformed the landscape of healthcare. From introducing industry-first technologies to setting new standards in diagnostic imaging, Canon continues to lead the way in delivering solutions that matter.

Canon’s legacy is built on breakthroughs such as its three-time award-winning wide-area CT systems, deep learning reconstruction that brings 1K resolution to CT imaging, and automation that streamlines workflow. 

  • These innovations have consistently elevated diagnostic confidence, patient safety, and operational efficiency.

In today’s world, AI is everywhere – but Canon’s AI is Meaningful AI. It’s not about AI for the sake of technology; it’s about creating real-world impact on patient care. 

  • Canon’s portfolio of scanner-integrated AI applications is designed to enhance image quality, streamline workflows, and improve consistency – ultimately delivering better care, better experience, and better efficiency for patients and providers alike.

Canon is redefining CT by making AI a core component across its portfolio. Key innovations include…

  • AI-Assisted Scanner Workflow Automation. Canon’s INSTINX platform introduces intuitive, intelligent, and integrated AI technologies that enable autonomous CT operations. By simplifying complex workflows, INSTINX helps technologists focus on patient care while improving throughput and reducing variability.
  • AI-Assisted Post-Processing. Canon’s Automation Platform offers a zero-click, AI-driven solution that accelerates image post-processing. By delivering fast, actionable insights, this platform ensures time-critical results reach care teams when they need them most.
  • AI-Assisted Reconstruction. Advanced algorithms such as AiCE DLR and PIQE DLR leverage deep learning to reveal critical diagnostic information – contrast and resolution – while optimizing dose efficiency. These tools empower clinicians to make confident diagnoses and reduce the need for additional downstream studies. Additionally, CLEARMotion, a DCNN-based algorithm, compensates for patient motion, reducing blur and delivering high-quality results even in challenging cases.

The Takeaway 

As Canon celebrates 50 years of CT innovation, its commitment remains clear: harnessing AI to make imaging smarter, faster, and more meaningful. With these advancements, Canon is not just shaping the future of CT – it’s setting a new benchmark for patient-centered care.

Next-Generation AI Platform Redefines Radiology Workflow Standards

AI is no longer viewed as just a diagnostic aid but as essential medical infrastructure. Nowhere is that more apparent than in lung screening, with Germany and other European Union countries increasingly embedding AI into their lung cancer screening guidelines and pilot programs.

This evolution will be on display at RSNA 2025, where Coreline Soft will introduce its groundbreaking chest AI platform AVIEW 2.0.

  • The solution demonstrates how unified AI automation is fundamentally transforming radiology workflows and elevating diagnostic precision across pulmonary, cardiac, and airway pathologies.

AVIEW 2.0 represents a paradigm shift from task-specific tools to an integrated diagnostic ecosystem. 

  • The platform seamlessly combines lung-cancer screening (LCS), coronary-artery calcium (CAC) scoring, and COPD quantification into a single, continuous analytical pipeline. 

Clinical validation shows radiologists using AVIEW 2.0 achieve an 89% increase in case throughput and a 60% reduction in interpretation time compared to the previous generation. 

  • This effectively consolidates multi-disease CT assessment into one streamlined, automated workflow.

AVIEW’s clinical foundation extends far beyond pilot studies. The platform has processed over 2.5M cases across 19 countries, establishing itself as a proven solution in diverse healthcare ecosystems. 

  • Most notably, AVIEW has been selected as the AI platform for major government-led lung cancer screening pilots and programs in Germany, France, and Italy.

Beyond Europe, AVIEW solutions are already integrated into major U.S. medical centers, where their clinical reliability has been independently validated in real-world settings…

  • UMass Memorial Medical Center has deployed the system as an integrated platform for LCS, CAC, and COPD diagnosis, supporting full-spectrum thoracic screening in daily radiology operations.
  • Temple Lung Center, 3DR Labs, and ImageCare Radiology have incorporated AVIEW products into their research and diagnostic environments – each adapting AI functions to site-specific workflows and physician preferences.

SOL Radiology, a fast-growing radiologist-owned practice serving communities across California and Illinois, has deployed AVIEW LCS Plus across its outpatient centers and hospital network, leveraging the platform for high-confidence nodule detection, rapid turnaround, and integrated COPD/CAC assessment. 

  • The group reports significant gains in diagnostic efficiency and consistency within one week of implementation, supporting its vision for technology-driven, high-quality community radiology.

With national-scale validation in Europe, clinical adoption across top-tier U.S. institutions, and 2.5M cases processed globally, Coreline Soft is positioning AVIEW 2.0 as the new benchmark for AI-driven thoracic imaging – where efficiency, accuracy, and scalability converge.

The Takeaway

Coreline Soft will conduct an end-to-end AI workflow demonstration in the “Radiology Reimagined” demo zone at RSNA 2025, using real-world clinical scenarios. With AVIEW and HUB, the full pathway – from triage and interpretation to reporting and quality management – will be validated against standards such as IHE and FHIR, allowing attendees to experience integrated flow firsthand. Learn more or book an appointment on Coreline Soft’s website.

AI First Drafts: A New Dawn for Radiology Reporting

For radiologists – the medical detectives who find clues in our medical images – the daily grind can feel like a “death by a thousand cuts.” Much of their time is spent not on diagnosis, but on tedious reporting. 

Now, a new generation of artificial intelligence is stepping in to serve as a high-tech scribe, automating the drudgery.

  • This AI tackles reporting, the most time-consuming part of radiologists’ workflow.

AI-enabled radiology reporting makes transcribing data from technologist worksheets a thing of the past, using Optical Character Recognition (OCR) to decipher everything, even what looks like “chicken scratch handwriting.” Then…

  • A large language model (LLM) applies clinical context to ensure it understands the meaning.
  • It intelligently injects that data into the correct sections of the radiologist’s personal report template.
  • Finally, it performs its own “inference,” like calculating a TI-RADS score and dropping it right into the impression.
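As a concrete example of that kind of inference, the ACR TI-RADS level is derived by summing points across five ultrasound feature categories and mapping the total to a TR level. A simplified sketch of the mapping (feature point values abbreviated in the comment):

```python
def ti_rads_level(total_points: int) -> int:
    """Map an ACR TI-RADS point total to a TR level (1-5).
    0 points -> TR1 (benign) ... >= 7 points -> TR5 (highly suspicious)."""
    if total_points <= 0:
        return 1
    if total_points <= 2:
        return 2
    if total_points == 3:
        return 3
    if total_points <= 6:
        return 4
    return 5

# Example nodule: solid composition (2) + hyperechoic (1)
# + taller-than-wide shape (3) + smooth margin (0) + no echogenic foci (0)
points = 2 + 1 + 3 + 0 + 0
print(f"TR{ti_rads_level(points)}")  # TR4
```

The AI’s job is to pull the feature descriptions from the dictation, score them, and drop the resulting TR level into the impression.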

Modern AI also learns from a radiologist’s actions, providing a hands-free way to build a report, with features such as…

Smart Measurements: When a lesion is measured, the AI recognizes the location and automatically adds the data and comparisons to prior scans into the report.

Automated Prior Population: Instead of struggling with speech-to-text, the AI notices when a prior study is opened for comparison and automatically populates that exam’s date.

Streamlined Expert Findings: A radiologist can simply state positive findings, and the AI acts as both writer and editor. 

AI-enabled radiology reporting weaves dictated phrases into complete sentences, generates an impression based on clinical guidelines like BI-RADS, and serves as a vigilant proofreader, flagging errors like laterality mistakes or semantic impossibilities. 

As AI technology matures, the software itself is becoming easier to build. The true differentiator is the team behind it. 

  • For radiologists evaluating these new reporting tools, it’s critical to look for teams that are “AI native” – built from the ground up with AI at their core. 

Companies founded on these principles, such as New Lantern, are pioneering these all-in-one radiology reporting solutions, treating the challenge not as a problem to be fixed with another widget, but as an opportunity to build one complete, intelligent platform. 

The Takeaway 

The evolution in AI-enabled radiology reporting isn’t about replacing radiologists; it’s a tool to augment their skills. Radiologists who harness AI to create reports faster will significantly outpace those who do not, allowing them to return their full focus to the art of diagnosis.

Ensemble Mammo AI Combines Competing Algorithms

If one AI algorithm works great for breast cancer screening, would two be even better? That’s the question addressed by a new study that combined two commercially available AI algorithms and applied them in different configurations to help radiologists interpret mammograms.

Mammography AI is emerging as one of the primary use cases for medical AI, understandable given that breast imaging specialists have to sort through thousands of normal cases to find one cancer. 

Most studies to date have applied a single AI algorithm to mammograms, but multiple algorithms are commercially available, so why not see how they work together? 

  • This kind of ensemble approach has already been tried with AI for prostate MRI scans – for example in the PI-CAI challenge – but South Korean researchers writing in European Radiology believed it would be a novel approach for mammography.

So they combined two commercially available algorithms – Lunit’s Insight MMG and ScreenPoint Medical’s Transpara – and used them to analyze 3k screening and diagnostic mammograms.

  • Not only did the authors combine competing algorithms, they also adjusted the ensemble’s output across five configurations that emphasized different screening parameters – for example favoring sensitivity or specificity, or having the algorithms assess cases in different sequences.

The authors assessed ensemble AI’s accuracy and ability to reduce workload by triaging cases that didn’t need radiologist review, finding that the ensemble…

  • Outperformed single-algorithm AI’s sensitivity in Sensitive Mode (84% vs. 81%-82%) with an 18% radiologist workload reduction.
  • Outperformed single-algorithm AI’s specificity in Specific Mode (88% vs. 84%-85%) with a 42% workload reduction.
  • Had 82% sensitivity in Conservative Mode but only reduced workload by 9.8%.
  • Saw little difference in sensitivity based on which algorithm read mammograms first (80.3% vs. 80.8%), but both sequential approaches reduced workload by 50%.
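The paper’s exact combination rules aren’t detailed here, but a common way to build such modes is to vary how the two algorithms’ outputs are combined – an OR-style rule favoring sensitivity, an AND-style rule favoring specificity. A hypothetical sketch (score scale, threshold, and mode names are assumptions):

```python
def flag_case(score_a: float, score_b: float, mode: str,
              threshold: float = 0.5) -> bool:
    """Combine two mammography AI suspicion scores into one recall decision.

    "sensitive": recall if EITHER algorithm is suspicious (fewer misses).
    "specific":  recall only if BOTH agree (fewer false positives).
    """
    a, b = score_a >= threshold, score_b >= threshold
    if mode == "sensitive":
        return a or b
    if mode == "specific":
        return a and b
    raise ValueError(f"unknown mode: {mode}")

# A borderline case: one algorithm suspicious, the other not.
print(flag_case(0.7, 0.3, "sensitive"))  # True  -> recalled for review
print(flag_case(0.7, 0.3, "specific"))   # False -> triaged out
```

Cases ruled out under the stricter rule are the ones that could be triaged away from radiologist review, which is where the workload reduction comes from.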

The authors suggested that if applied in routine clinical use, ensemble AI could be tailored based on each breast imaging practice’s preferences and where they felt they needed the most help.

The Takeaway

The new results offer an intriguing application of the ensemble AI strategy to mammography screening. Given the plethora of breast AI algorithms available and the rise of platform AI companies that put dozens of solutions at clinicians’ fingertips, it’s not hard to see this approach being put into clinical practice soon.

AI for Brain MRI

What if you could speed up brain MRI exams by performing fast scans for most patients, and reserving complex sequences for the patients who need them? A hint of that future comes from a new study in which AI showed progress in helping radiologists interpret scans with fewer sequences.

MRI can visualize minute structures in the body, especially in the brain, but it’s one of the trickiest imaging modalities to operate.

  • There’s an alphabet soup of MRI pulse sequences, and the modality’s complexity is multiplied when contrast has to be used. 

Breast MRI experts have been experimenting with abbreviated scanning protocols that speed up image acquisition and interpretation by using fewer and less complex sequences.

  • Researchers applied that concept to MRI brain imaging in a new European Journal of Radiology paper in which they tested Cerebriu’s Apollo AI algorithm with 414 patients from four hospitals in Denmark.

Apollo processes three brain MRI sequences (DWI, SWI or T2* GRE, and T2-FLAIR) and can detect critical findings like brain infarcts, intracranial hemorrhages, and tumors while the patient is still on the table.

  • If an abnormality is detected, Apollo prompts technologists to acquire a fourth sequence, such as T1-weighted imaging.

That sounds great, but how well does Apollo work in the real world? 

  • Researchers compared the algorithm’s performance to that of expert neuroradiologists in multiple workflows, such as reading three- and four-sequence MRI scans with and without AI assistance. 

Compared to neuroradiologists using the four-sequence MRI protocol without AI assistance, they found…

  • Apollo’s sensitivity was higher than the neuroradiologists’ for brain infarcts (94% vs. 89%) and intracranial tumors (74% vs. 71%) but slightly lower for intracranial hemorrhages (82% vs. 83%).
  • AI’s specificity was somewhat lower, however, for brain infarcts (86% vs. 99%), intracranial hemorrhages (84% vs. 99%), and intracranial tumors (62% vs. 97%). 
  • When neuroradiologists had AI findings in addition to the four-sequence protocol, tumor detection sensitivity improved slightly, but specificity fell. 

While Apollo’s sensitivity was a benefit, the researchers said its low specificity “presents a challenge” and could result in unnecessary additional sequences or contrast administration. 

  • Specificity could be affected by age-related changes in older patients, as well as differences in MRI scanner models used.

The Takeaway

The new findings show that AI-aided MRI scan assistance still needs refinement. But it’s still early days for Cerebriu and Apollo (which has the CE Mark but not FDA clearance), so watch this space for more updates. 

RP Builds AI Mosaic as Company’s IT Foundation

Radiology Partners announced a new initiative to guide the rollout of AI across its nationwide network of radiology practices. The company’s new MosaicOS will be the IT foundation that connects RP practices and supports clinical uses from AI-assisted reporting to report generation and even image management.

Radiology Partners has grown since its founding in 2012 to become the largest privately held provider of imaging services in the U.S. and a major force behind the consolidation of private-practice radiology groups.

  • RP has always maintained a heavy technology investment, and has been looking closely at the rise of AI in radiology.

That’s because the growth in imaging volume is so massive that clinicians will no longer be able to care for patients adequately without AI’s assistance, at least according to RP’s Associate Chief Medical Officer for Clinical AI Nina Kottler, MD.

RP laid the groundwork for MosaicOS in 2020 by first migrating its technology stack to a cloud-native infrastructure. 

  • This frees RP from reliance on on-premises legacy software and enables the company to push out updates that can be adopted quickly across its network.

RP’s Mosaic rollout includes the following components as the company…

  • Forms a new division, Mosaic Clinical Technologies, to oversee its AI activities.
  • Debuts MosaicOS, a cloud-native operating system that combines AI support with workflow and other IT tools.
  • Launches Mosaic Reporting, an automated structured reporting solution that combines ambient voice AI with large language model technology.
  • Develops Mosaic Drafting, a multimodal AI foundation model that pre-drafts X-ray reports that radiologists can review, edit, and sign. 

Mosaic Reporting is already in use at some RP sites, and the company is pursuing FDA clearance for broader use of Mosaic Drafting. More Mosaic applications are on the way.

  • Mosaic tools will be disseminated to RP centers using the cloud-native infrastructure, and MosaicOS will include image management functions that providers can choose to use in place of or alongside existing tools like viewers and archives. 

Kottler told The Imaging Wire that RP has de-emphasized individual pixel-based AI models in favor of foundation models that have broader application.

  • What’s more, RP CEO Rich Whitney said the company has chosen to develop AI technology internally rather than rely on outside vendors, as this gives it greater control over its own AI adoption.

The Takeaway

The launch of MosaicOS marks an exciting milestone not only for Radiology Partners but also for radiology in general that could address nagging concerns about clinical AI adoption on a broad scale. RP has not only the network but also the technology resources to make the rollout a success – the question is whether outside AI developers will share in the rewards.

Radiology AI Approvals Near 1k in New FDA Update

The FDA last week released the long-awaited update to its list of AI-enabled medical devices that have received marketing authorization. The closely watched list shows the number of AI-enabled radiology authorizations approaching the 1k mark.

The FDA has been tracking authorizations of AI-enabled devices going back to 1995, and the list gives industry watchers a feel for not only how quickly the agency is churning out reviews but also which medical specialties are generating the most approvals.

  • But the last time the FDA released an updated list was August 2024, and recent turmoil at the agency had some observers wondering if it would continue the tradition – as well as whether it could stay on pace for new approvals.

Those fears should be assuaged by the new release, which indicates that through May 2025…

  • The FDA has granted authorization to 1.2k AI-enabled medical devices since it started tracking.
  • It has approved 956 AI-enabled radiology products, or 77% of all medical authorizations.
  • Radiology’s share of authorizations from January to May 2025 ticked up to 78% (115/148), compared to 73% in the 2024 update and 80% in all of 2023.
  • GE HealthCare remains the company with the most radiology AI authorizations at 96 (including recent acquisitions like Caption Health and MIM Software), with Siemens Healthineers in second place at 80 (including Varian). 
  • Other notable mentions include Philips (42, including DiA Analysis), Canon (35), United Imaging (32), and Aidoc (30). 

In a significant regulatory development, the FDA said it was developing a plan to identify and tag medical devices that use foundation models, including large language models and multimodal architecture. 

  • The agency said the program would help healthcare providers and patients know when LLM-based functionality was included in a medical device (the FDA has yet to approve a medical device with LLM technology). 

In another interesting change, the FDA dropped “machine learning” from the title of its list, apparently with the idea that “AI” was sufficient as an umbrella term. 

The Takeaway

The FDA’s release of its AI approval list is a welcome return to past practices that should reassure agency watchers that recent turmoil isn’t affecting its basic operations. The LLM guidance suggests the agency may be changing its approach to the technology in favor of disclosure and transparency instead of more stringent regulation that could delay some LLM solutions from reaching the market.

AI and Legal Liability in Radiology

What impact will artificial intelligence have on the legal liability of the radiologists who use it? A new study in NEJM AI suggests that medical malpractice juries may pass harsher judgment on radiologists when they make mistakes that disagree with AI findings.

AI is viewed as a technology that can save radiologists time while also helping them make more accurate diagnoses.

  • But there’s a dark side to AI as well – what happens when AI findings aren’t correct, or when radiologists disagree with AI only to discover it was right all along?

In the new study, a research team led by Michael Bernstein, PhD, of Brown University queried 1.3k U.S. adults on their attitudes toward radiologists’ legal liability in two clinical use cases for AI – identifying brain bleeds and detecting lung cancers.

  • Participants were asked if they felt radiologists met their duty of care to patients across different scenarios, such as whether the AI and the radiologist agreed or disagreed on the original diagnosis. 

Responses were compared to a “no AI” control scenario in which respondents assessed legal liability if radiologists hadn’t used AI at all, with researchers finding …

  • If radiologists disagreed with AI, more respondents found radiologists liable …
    • Brain bleeds: 73% found radiologist liable (vs. 50% with no AI)
    • Lung cancer: 79% found radiologist liable (vs. 64% with no AI)
  • If both radiologists and AI missed the diagnosis, there was no statistically significant difference …
    • Brain bleeds: (50% vs. 56% with no AI, p=0.33)
    • Lung cancer: (64% vs. 65% with no AI, p=0.77)
  • Respondents were less likely to side with plaintiffs when given information about standard AI error rates …
    • When AI disagreed with the radiologist diagnosis:
      • Brain bleeds: (73% plaintiff agreement fell to 49%)
      • Lung cancer: (79% fell to 73%)
    • When AI agreed with the radiologist diagnosis:
      • Brain bleeds: (50% fell to 34%)
      • Lung cancer: (64% fell to 56%)

The Takeaway

The new study offers a fascinating look at AI’s future in radiology from a medico-legal perspective. But there’s one question the researchers didn’t address: If AI-supported image interpretation eventually becomes the standard of care, will radiologists be found liable for not using it at all? Stay tuned. 
