Doctors Adopt ‘Shadow AI’ for Efficiency Gains

Doctors under pressure to work more efficiently are looking for help from “shadow AI” – artificial intelligence applications adopted outside a formal hospital approval process. A new survey of U.S. healthcare personnel found that many administrators have encountered unauthorized AI tools in their organizations, including some used for direct patient care. 

U.S. healthcare providers are struggling under rising patient volumes in the midst of an ongoing workforce shortage, a situation that’s leading to burnout among clinicians. 

  • AI is often touted as a possible solution by enabling providers to do more with less, but the jury is still out on whether this works in the real world. 

The new survey was conducted by Wolters Kluwer Health to assess usage of what the report described as “shadow AI,” or AI that’s adopted without proper hospital authorization processes. 

  • Shadow AI introduces risk to data, security, and privacy, and providers should better understand the need for an enterprise approach to AI with appropriate controls.

It’s worth noting that the report’s use of the term “authorization” applies primarily to an institution’s internal approval and governance processes for AI rather than formal FDA regulatory authorization. 

  • AI algorithms that aren’t used for direct patient care don’t require FDA authorization, as the agency pointed out in a guidance just a few weeks ago. 

Researchers surveyed 518 health professionals, finding…

  • 41% were aware of colleagues using unauthorized AI tools.
  • 17% said they had personally used an unauthorized tool.
  • 10% said they had used an unauthorized AI tool for direct patient care.

While the report’s recommendation for stronger AI governance is valid, there could be a competitive subtext to the findings. Wolters Kluwer offers healthcare clinical decision support solutions, and the company is currently locked in a fierce battle with OpenEvidence for dominance in the CDS space.

  • OpenEvidence’s CDS solution is wildly popular with clinicians, many of whom install and consult with the software on their own, outside an enterprise-level governance – exactly the kind of “unauthorized” model the new report criticizes.

The Takeaway

The Wolters Kluwer report could be shedding light on a concerning new trend, or it could represent an effort by an established player to shut out a competitive threat. Either way, its warning on the need for appropriate enterprise-level AI governance should not be ignored.

Canon Celebrates 50 Years of CT Innovation: Redefining Healthcare with Meaningful AI

This year marks a historic milestone for Canon – five decades of pioneering CT innovation that has transformed the landscape of healthcare. From introducing industry-first technologies to setting new standards in diagnostic imaging, Canon continues to lead the way in delivering solutions that matter.

Canon’s legacy is built on breakthroughs such as its three-time award-winning wide-area CT systems, deep learning reconstruction that brings 1K resolution to CT imaging, and automation improving workflow. 

  • These innovations have consistently elevated diagnostic confidence, patient safety, and operational efficiency.

In today’s world, AI is everywhere – but Canon’s AI is Meaningful AI. It’s not about AI for the sake of technology; it’s about creating real-world impact on patient care. 

  • Canon’s portfolio of scanner-integrated AI applications is designed to enhance image quality, streamline workflows, and improve consistency – ultimately delivering better care, better experience, and better efficiency for patients and providers alike.

Canon is redefining CT by making AI a core component across its portfolio. Key innovations include…

  • AI-Assisted Scanner Workflow Automation. Canon’s INSTINX platform introduces intuitive, intelligent, and integrated AI technologies that enable autonomous CT operations. By simplifying complex workflows, INSTINX helps technologists focus on patient care while improving throughput and reducing variability.
  • AI-Assisted Post-Processing. Canon’s Automation Platform offers a zero-click, AI-driven solution that accelerates image post-processing. By delivering fast, actionable insights, this platform ensures time-critical results reach care teams when they need them most.
  • AI-Assisted Reconstruction. Advanced algorithms such as AiCE DLR and PIQE DLR leverage deep learning to reveal critical diagnostic information – contrast and resolution – while optimizing dose efficiency. These tools empower clinicians to make confident diagnoses and reduce the need for additional downstream studies. Additionally, CLEARMotion, a DCNN-based algorithm, compensates for patient motion, reducing blur and delivering high-quality results even in challenging cases.

The Takeaway 

As Canon celebrates 50 years of CT innovation, its commitment remains clear: harnessing AI to make imaging smarter, faster, and more meaningful. With these advancements, Canon is not just shaping the future of CT – it’s setting a new benchmark for patient-centered care.

Next-Generation AI Platform Redefines Radiology Workflow Standards

AI is no longer viewed as merely a diagnostic aid but as essential medical infrastructure. Nowhere is that more apparent than in lung screening, with Germany and other European Union countries increasingly embedding AI into their lung cancer screening guidelines and pilot programs.

This evolution will be on display at RSNA 2025, where Coreline Soft will introduce its groundbreaking chest AI platform AVIEW 2.0.

  • The solution demonstrates how unified AI automation is fundamentally transforming radiology workflows and elevating diagnostic precision across pulmonary, cardiac, and airway pathologies.

AVIEW 2.0 represents a paradigm shift from task-specific tools to an integrated diagnostic ecosystem. 

  • The platform seamlessly combines lung-cancer screening (LCS), coronary-artery calcium (CAC) scoring, and COPD quantification into a single, continuous analytical pipeline. 

Clinical validation shows radiologists using AVIEW 2.0 achieve an 89% increase in case throughput and a 60% reduction in interpretation time compared to the previous generation. 

  • This effectively consolidates multi-disease CT assessment into one streamlined, automated workflow.

AVIEW’s clinical foundation extends far beyond pilot studies. The platform has processed over 2.5M cases across 19 countries, establishing itself as a proven solution in diverse healthcare ecosystems. 

  • Most notably, AVIEW has been selected as the AI platform for major government-led lung cancer screening pilots and programs in Germany, France, and Italy.

Beyond Europe, AVIEW solutions are already integrated into major U.S. medical centers, where their clinical reliability has been independently validated in real-world settings…

  • UMass Memorial Medical Center has deployed the system as an integrated platform for LCS, CAC, and COPD diagnosis, supporting full-spectrum thoracic screening in daily radiology operations.
  • Temple Lung Center, 3DR Labs, and ImageCare Radiology have incorporated AVIEW products into their research and diagnostic environments – each adapting AI functions to site-specific workflows and physician preferences.

SOL Radiology, a fast-growing radiologist-owned practice serving communities across California and Illinois, has deployed AVIEW LCS Plus across its outpatient centers and hospital network, leveraging the platform for high-confidence nodule detection, rapid turnaround, and integrated COPD/CAC assessment. 

  • The group reports significant gains in diagnostic efficiency and consistency within one week of implementation, supporting its vision for technology-driven, high-quality community radiology.

With national-scale validation in Europe, clinical adoption across top-tier U.S. institutions, and 2.5M cases processed globally, Coreline Soft is positioning AVIEW 2.0 as the new benchmark for AI-driven thoracic imaging – where efficiency, accuracy, and scalability converge.

The Takeaway

Coreline Soft will conduct an end-to-end AI workflow demonstration in the “Radiology Reimagined” demo zone at RSNA 2025, using real-world clinical scenarios. With AVIEW and HUB, the full pathway – from triage and interpretation to reporting and quality management – will be validated against standards such as IHE and FHIR, allowing attendees to experience integrated flow firsthand. Learn more or book an appointment on Coreline Soft’s website.

RP Acquires Vision AI Firm Cognita Imaging

Radiology Partners ramped up its investment in AI by acquiring Cognita Imaging, a startup that's developed AI vision language models for analyzing CT and X-ray images and drafting initial radiology reports. RP executives see the acquisition as a move beyond traditional point-source AI models toward a future where AI automates much of the traditional image interpretation process.

The $80M acquisition expands on an equity stake RP already had in Cognita, which had been operating in stealth mode since its spin-off from Stanford University’s Center for Artificial Intelligence in Medicine and Imaging lab.

  • Cognita was formed by a team led by CEO Louis Blankemeier, PhD, to commercialize Stanford research on vision language models, a type of generative AI that’s far more versatile than the traditional point-source models being commercialized to analyze medical images.

Instead, Cognita’s technology is able to analyze text as well as CT or X-ray images and produce first drafts of radiology reports that just need a radiologist’s review and signature to be complete.

  • Extremely positive clinical tests with Cognita's VLMs spurred RP to acquire the rest of the company it didn't already own, said Rich Whitney, chairman and CEO of Radiology Partners. 

Cognita’s technology powers Mosaic Drafting, RP’s new application for helping radiologists draft reports that operates under the company’s recently launched Mosaic Clinical Technologies branding. Early clinical testing has found that Mosaic Drafting…

  • Increases radiologist detection rates by 52%.
  • Results in a fourfold decline in radiologist errors.
  • Reduces radiologist reading times by up to 76%.

RP plans to deploy Mosaic Drafting through Mosaic Clinical Technologies, which the company launched in July as the technological foundation for a massive rollout of AI across its physician practices. 

  • Mosaic Chief Medical AI Officer Nina Kottler, MD, said Mosaic Drafting is currently being used within Radiology Partners under IRB approval, but the company will pursue FDA authorization – most likely via the de novo pathway – which will probably come sometime in 2026.

In a broader sense, RP sees Mosaic Drafting and other VLM tools as key to addressing the growing mismatch between rising imaging volume and stagnant radiologist supply – a mismatch that can only be solved through greater automation. 

  • And as the largest private radiology organization in the U.S., Radiology Partners has the organizational heft to make VLMs work on a wide scale.

The Takeaway 

RP’s acquisition of Cognita is a major development in putting vision language models on the fast track to real-world clinical use. Unlike point-source AI, VLMs could hold the key to really solving radiology’s volume overload dilemma.

An All-in-One Radiology Platform Built for the AI Era

Early in the COVID pandemic, software engineer Shiva Suri found himself working from home alongside his radiologist mother in his parents’ basement. What he saw would lead him to build New Lantern, an AI-native platform set to disrupt the legacy radiology software market.

Suri witnessed his “world-class radiologist” mom wasting far too much time switching between five different PACS platforms and repeating the same cumbersome reporting processes with each case.

“I thought a radiologist’s job was supposed to be playing Sherlock Holmes in images,” Suri recalls, “not constantly mouse-clicking all over their PACS and tab-dictating endlessly in their reporting software.”

That imperfect workflow is an unfortunate reality for today’s radiologists, who’ve seen their processes become more tedious, while their caseloads grow in both volume and complexity.

Rads Don’t Need Another Widget

Suri’s time spent working from home became the foundation for New Lantern’s bold mission:  keep radiologists’ eyes on their images and let AI do the rest. 

  • That mission evolved over time, as Suri’s first attempt at solving radiology’s efficiency problem was a widget to automate report impressions.
  • Radiologists loved it, but… each wave of praise came with requests for more automation, leading Suri to realize that radiology’s problems weren’t going to be solved with another widget. The solution had to be fundamentally different.

The Time Is Right for an All-in-One Solution

Developing radiology’s go-to reading and reporting platform had to start with radiologists’ dream state, with their eyes on the viewer, reading image after image. 

  • It had to be based on the understanding that this dream can’t be achieved while radiologists are navigating a loosely integrated software stack.
  • The good news is, now is the perfect time to solve radiology’s software problem. The radiologist shortage and surging imaging volumes are finally driving radiology practices to look for new tech partners, and the emergence of generative AI is allowing startups to gain traction in segments that have long been dominated by entrenched legacy players. 

Enter New Lantern Curie

This perfectly timed mix of tech and market readiness set the stage for Curie, New Lantern’s all-in-one platform that combines a smart worklist, cloud PACS viewer, and AI reporter to produce AI-automated radiology report drafts.

Radiology report automation is no small task, and there’s a lot that goes into Curie’s ability to automate over 75% of non-diagnostic radiology work…

  • Streamlined Dictation – Radiologists free-dictate positive findings (no punctuation or commands), and the AI weaves them into complete sentences, generates guideline-based impressions (calculating BI-RADS, etc.), and flags errors.
  • No Tech Translations – Curie uses OCR technology to decipher technologist worksheets, applies clinical context via an LLM, and intelligently places data in the right report sections.
  • Remove Repetition – Radiologists no longer need to dictate measurements or enter prior dates. Curie handles these and a long list of other duplicative tasks for them.

The Numbers Tell the Story

All of these automations really add up, giving radiologists over 100 minutes back per shift, so they can get more done and get their lives back.

Here’s one real-world example presented at SIIM 2025 of a radiologist’s process for reading a pulmonary embolism CTA chest exam, before and after Curie…

  • Words dictated — 205 vs. 57
  • Punctuation marks & commands — 19 vs. 0
  • Fields navigated — 32 vs. 1
  • Metadata entries — 8 vs. 0 

In this example, Curie produced the same complete, accurate report with 72% fewer dictated words and 97% less navigation through dictation fields and hanging protocol changes. That’s one type of “AI taking radiologists’ jobs” that just about every radiologist would welcome.
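For readers who want to check the math, the quoted reductions follow directly from the dictation counts above. A minimal sketch (the counts are from the SIIM 2025 example; the helper function is our own illustration):

```python
def percent_reduction(before: int, after: int) -> int:
    """Percent reduction from 'before' to 'after', rounded to the nearest whole percent."""
    return round(100 * (before - after) / before)

# Counts from the SIIM 2025 before/after Curie example
words_saved = percent_reduction(205, 57)   # dictated words: 205 -> 57
fields_saved = percent_reduction(32, 1)    # fields navigated: 32 -> 1

print(words_saved, fields_saved)  # 72 97
```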

The Takeaway

As imaging volumes surge and antiquated platforms push radiologists to the breaking point, New Lantern Curie offers them a way to work like it’s 2025 instead of 2005 – automating the fragmentation and duplication out of their days so world-class radiologists like Shiva Suri’s mom can focus on what they do best: reading images.

Learn more about New Lantern and its all-in-one approach to radiology workflow in this Imaging Wire Show video interview.

AI in Radiology: Old Problems, New Tech

By Mo Abdolell, CEO, Densitas

Radiology has seen this movie before. Big promises (efficiency, accuracy, burnout relief). Big anxieties (ROI, workflow chaos, pressure to “keep up”). The question isn’t whether AI is powerful. It’s whether we’ve learned how to deploy new technology without repeating the pain of PACS migrations and the EHR era.

The Myth of the Perfect Rollout. Health technology assessment (HTA) sounds great in theory – rigorous, comprehensive, evidence-first. In practice, few organizations have the time, talent, or budget to execute it at scale. 

  • Remember EHRs: adoption happened because policy and money forced it, not because the playbook was tidy. Healthcare’s default pattern is to adopt, then evolve – messy, market-driven, and iterative. Waiting for perfect plans is how you get left behind.

Are AI’s Problems really new?

  • Black box déjà vu. Radiology has long trusted complex, opaque systems (reconstruction algorithms, vendor-specific pipelines). What mattered – and still matters – is validated performance and dependable outputs, not full internal transparency.
  • Model drift ≈ old friends. We’ve always recalibrated clinical tools as populations and scanners change. Monitoring and revalidation are known problems, not alien ones.

What’s Different This Time? Unlike the top-down EHR mandate, AI is largely market-driven. That gives providers agency. 

  • AI solutions must save time, improve outcomes, or avoid costs – not just publish a ROC curve. They must show operational value inside the native radiology workflow.

Fortunately, there are ways to adopt AI and then evolve your processes to make it work…

  • Workflow or bust. Demand in-viewer evidence objects, one-click report insertion, and EHR write-back. If AI adds steps, it subtracts value.
  • Start narrow, scale deliberately. Pick high-volume, high-friction tasks. Prove value in weeks, not years. Expand only when the operational signal is undeniable.
  • Measure what matters. Track operational metrics like seconds saved and coverage (e.g. eligible cases processed before dictation), reliability (e.g. results present before finalization, fail-open behavior), and user friction like context-switching rate and time-to-evidence.
  • Monitor. Stand up organization and site-level performance checks. Treat AI like equipment – scheduled, observed, and maintained.
  • Invest in long-term value. Favor standards, vendor-agnostic interoperability, clear telemetry, and transparent pricing.

The Takeaway

AI’s success in radiology won’t be defined by elegance of algorithms but by pragmatism of deployment. This will be an evolution – hands-on, incremental, sometimes messy. The difference now is that radiology can drive. Make the technology serve the service line – not the other way around.

Target the toughest workflows. Adapt and evolve with Densitas Breast Imaging AI Suite.

AI First Drafts: A New Dawn for Radiology Reporting

For radiologists – the medical detectives who find clues in our medical images – the daily grind can feel like a “death by a thousand cuts.” Much of their time is spent not on diagnosis, but on tedious reporting. 

Now, a new generation of artificial intelligence is stepping in to serve as a high-tech scribe, automating the drudgery.

  • This AI tackles reporting, the most time-consuming part of radiologists’ workflow.

AI-enabled radiology reporting makes transcribing data from technologist worksheets a thing of the past, using Optical Character Recognition (OCR) to decipher everything, even what looks like “chicken scratch handwriting.” Then…

  • A large language model (LLM) applies clinical context to ensure it understands the meaning.
  • It intelligently injects that data into the correct sections of the radiologist’s personal report template.
  • Finally, it performs its own “inference,” like calculating a TI-RADS score and dropping it right into the impression.

Modern AI also learns from a radiologist’s actions, providing a hands-free way to build a report, with features such as…

Smart Measurements: When a lesion is measured, the AI recognizes the location and automatically adds the data and comparisons to prior scans into the report.

Automated Prior Population: Instead of struggling with speech-to-text, the AI notices when a prior study is opened for comparison and automatically populates that exam’s date.

Streamlined Expert Findings: A radiologist can simply state positive findings, and the AI acts as both writer and editor. 

AI-enabled radiology reporting weaves dictated phrases into complete sentences, generates an impression based on clinical guidelines like BI-RADS, and serves as a vigilant proofreader, flagging errors like laterality mistakes or semantic impossibilities. 

As AI technology matures, the software itself is becoming easier to build. The true differentiator is the team behind it. 

  • For radiologists evaluating these new reporting tools, it’s critical to look for teams that are “AI native” – built from the ground up with AI at their core. 

Companies founded on these principles, such as New Lantern, are pioneering these all-in-one radiology reporting solutions, treating the challenge not as a problem to be fixed with another widget, but as an opportunity to build one complete, intelligent platform. 

The Takeaway 

The evolution in AI-enabled radiology reporting isn’t about replacing radiologists; it’s a tool to augment their skills. Radiologists who harness AI to create reports faster will significantly outpace those who do not, allowing them to return their full focus to the art of diagnosis.

Does BMI Affect AI Accuracy?

High body mass index is known to create problems for various medical imaging modalities, from CT to ultrasound. Could it also affect the accuracy of artificial intelligence algorithms? Researchers asked this question as it pertains to lung nodule detection in a new study in European Journal of Radiology.

X-ray photons attenuate as they pass through body tissue, so scans of larger patients can suffer decreased image quality and more noise.

  • This is particularly a challenge for CT exams that don’t use a lot of radiation, like low-dose CT lung screening. 

At the same time, AI algorithms are being developed to make LDCT screening more efficient, such as by identifying and classifying lung nodules.

  • But if high BMI makes CT images noisier, will that affect AI’s performance? Researchers from the Netherlands tested the idea in 352 patients who got LDCT screening as part of the Lifelines study.

Researchers compared patients at the high end of the BMI spectrum (mean BMI 39.8) with those at the low end (mean 18.7). 

  • Both Siemens Healthineers' AI-Rad Companion Chest CT algorithm and a human radiologist performed lung nodule detection, and their results were compared. 

Across the study population, researchers found…

  • There was no statistically significant difference in AI’s sensitivity between high and low BMI groups (0.75 vs. 0.80, p = 0.37). 
  • Nor was there any difference in the human radiologist’s sensitivity (0.76 vs. 0.84, p = 0.17).
  • AI had fewer false positives per scan in the high BMI group than in the low BMI group (0.30 vs. 0.55), a difference that was statistically significant (p = 0.05). 
  • The corresponding difference for the human radiologist was not statistically significant (0.05 vs. 0.16, p = 0.09).
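For context on what these metrics measure: sensitivity is the fraction of actual nodules a reader detects, and false positives are averaged per scan. A minimal sketch with hypothetical counts (the study doesn't report raw counts, so the numbers below are illustrative only):

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of actual nodules detected."""
    return true_pos / (true_pos + false_neg)

def false_positives_per_scan(false_pos: int, n_scans: int) -> float:
    """Average number of spurious detections per scan."""
    return false_pos / n_scans

# Hypothetical counts, chosen only to mirror the reported ratios
print(sensitivity(75, 25))                # 0.75
print(false_positives_per_scan(30, 100))  # 0.3
```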

The study authors attributed AI's slightly lower sensitivity in the high BMI group to more noise in the high BMI scans.

  • They recommended that AI developers include people with both high and low BMI in datasets used for training algorithms.

The Takeaway

The results offer some comfort that patient BMI probably doesn't have a huge effect on AI performance for nodule detection in lung screening, but they suggest a possible effect that might have reached statistical significance with a larger sample size. More study in the area is definitely needed given the rising importance of AI for CT lung cancer screening. 

Could States Take Over AI Regulation from the FDA?

Could states take over AI regulation from the FDA as a possible solution to the growing workforce shortage in radiology? It may seem like a wild idea at first, but it’s a question proposed in a special edition of Academic Radiology focusing on radiology and the law. 

Healthcare’s workforce shortage is no secret, and in radiology it’s manifested itself with tight supplies of both radiologists and radiologic technologists. 

  • AI has been touted as a potential solution to lighten the workload, such as by triaging images most likely to be normal away from immediate radiologist review. 

And autonomous AI – algorithms that operate without human oversight – is already nibbling at radiology's fringes, with at least one company claiming its solution can produce full radiology reports without human intervention.

  • But the FDA is notoriously conservative when it comes to authorizing new technologies, and AI is no exception. So what’s to stop a state facing a severe radiologist shortage from adopting autonomous AI on its own to help out? 

The new article reviews the legal landscape across both constitutional and state law, finding examples in which some states have successfully defied federal regulation – such as by legalizing marijuana use – when the issue has broad public support. 

But the authors eventually answer their own question in the negative, stating that it’s not likely states will usurp the FDA’s role regulating AI because…

  • The U.S. Constitution’s Supremacy and Commerce clauses ensure federal law will always supersede state law.
  • If AI made an error, malpractice liability would be murky given the lack of legal precedent at the state level. 
  • Teleradiologists could opt out of providing care to a state if AI regulations were too burdensome – which could exacerbate the workforce crisis. 

The Takeaway

Ultimately, it’s not likely states will take over AI regulation from the FDA, even if the healthcare workforce shortage worsens significantly. But the Academic Radiology article is an interesting thought experiment that – in an environment in which U.S. healthcare policies have already been turned upside down – may not be so unthinkable after all. 

AI Spots Lung Nodules

A new study in Radiology on an AI algorithm for analyzing lung nodules on CT lung cancer screening exams shows that radiologists may be able to have their cake and eat it too: better identification of malignant nodules with lower false-positive rates. 

The rising utilization of low-dose CT screening is great news for clinicians (and eligible patients), but managing suspicious nodules remains a major challenge, as false-positive findings expose patients to unnecessary biopsies and costs.

  • False-positive rates have come down somewhat from the high rates seen in the big lung cancer screening clinical trials like NLST and NELSON, but there is still room for improvement.

Dutch researchers applied AI to the problem, developing a deep learning algorithm trained on 16.1k NLST nodules that produces a score from 0% to 100% representing a nodule's likelihood of malignancy. 

  • They then tested the algorithm on baseline screening rounds of 4.1k patients from three datasets drawn from different lung cancer screening trials: NELSON, DLCST in Denmark, and MILD in Italy.

The algorithm’s performance was compared to the Pan-Canadian Early Detection of Lung Cancer model, a widely used clinical guideline that uses patient characteristics like age and family history and nodule characteristics size and location to estimate risk.

Compared to PanCan, the deep learning algorithm…

  • Reduced false-positive findings sharply by classifying more benign cases as low risk (68% vs. 47%) when set at 100% sensitivity for cancers diagnosed within one year.
  • For all nodules, achieved comparable AUCs at one year (0.98 vs. 0.98), two years (0.96 vs. 0.94), and throughout screening (0.94 vs. 0.93).
  • For indeterminate nodules 5-15 mm, significantly outperformed PanCan at one year (0.95 vs. 0.91), two years (0.94 vs. 0.88), and throughout screening (0.91 vs. 0.86).
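The 100% sensitivity operating point above works by placing the decision threshold just below the lowest-scoring cancer, then counting how many benign nodules fall under it. A minimal sketch with synthetic scores (the study's actual scores aren't available, so the data below is purely illustrative):

```python
def benign_ruled_out_at_full_sensitivity(cancer_scores, benign_scores):
    """Set the malignancy-score threshold at the lowest cancer score (so no
    cancer is classified low risk) and return the fraction of benign nodules
    falling below that threshold."""
    threshold = min(cancer_scores)  # every cancer scores at or above this
    low_risk = [s for s in benign_scores if s < threshold]
    return len(low_risk) / len(benign_scores)

# Synthetic 0-100% malignancy scores, illustrative only
cancers = [62, 71, 85, 93]
benigns = [5, 12, 18, 25, 33, 40, 55, 64, 70, 80]

print(benign_ruled_out_at_full_sensitivity(cancers, benigns))  # 0.7
```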

The model’s performance for indeterminate nodules is particularly intriguing, as these are challenging to manage due to their small size and can lead to unnecessary follow-up procedures.

The Takeaway

Using AI to differentiate malignant from benign nodules promises to make CT lung cancer screening more accurate and easier to perform than manual nodule classification methods – and should add to the exam’s growing momentum.

Get every issue of The Imaging Wire, delivered right to your inbox.