AI-Driven Diagnostics Detects Multiple Chest Diseases from Single CT Scan

A new generation of AI solutions is enabling clinicians to detect multiple chest pathologies from a single CT scan. Lung cancer, cardiovascular disease, and chronic obstructive pulmonary disease (COPD) can all be detected in just one imaging session, ushering in a new era of more efficient imaging that benefits both providers and patients. 

Advances in CT lung cancer screening have been generating headlines as new research highlights the improved clinical outcomes possible when lung cancer is detected early. 

  • But lung cancer is just one of the “big three” thoracic comorbidities – the others being cardiovascular disease and COPD – that can result from long-term exposure to toxic substances like tobacco smoke. 

These comorbidities will be encountered more often as health systems ramp up lung cancer screening efforts, creating challenges for radiologists in managing the many incidental findings discovered on chest CT scans.

  • And it’s common knowledge that radiologists already have their hands full in an era of personnel shortages and rising imaging volumes. 

Fortunately, new AI technologies offer a solution. One of these is Coreline Soft’s AVIEW LCS Plus, an integrated three-in-one solution that enables simultaneous detection of lung cancer, cardiovascular disease, and COPD from a single chest CT scan. 

  • AVIEW LCS Plus is the only solution adopted for national lung cancer screening projects across key countries, including Korea (K-LUCAS), Germany (HANSE), Italy (RISP), and the pan-European consortium (4ITLR). 

Coreline’s solution is widely recognized as a pioneering AI platform for an era where unexpected findings can save lives, gaining increasing attention in academic journals and health policy reports alike.

  • In the U.S., AVIEW LCS Plus has been adopted by Temple Health, and the Pennsylvania system’s use of the solution in their Temple Healthy Chest initiative has been recognized as an innovative healthcare model within the Philadelphia region. 

Temple Health clinicians are finding that AI contributes to early detection of incidental findings, improved survival rates, and more proactive care planning.

  • AVIEW LCS Plus is streamlining lung cancer screening, helping identify chest conditions at earlier stages, when treatment is most effective. It finds not only lung nodules but also comorbidities that are often missed in conventional CT workflows. 

Coreline Soft will be presenting AVIEW LCS Plus in collaboration with Temple Health at the upcoming American Thoracic Society (ATS 2025) international conference in San Francisco from May 16-21. 

  • Attendees will be able to learn how AI in medical imaging can establish new standards, not just in diagnostics, but across policy, patient care, and hospital strategy. 

Getting Paid for AI – Will It Get Easier?

Reimbursement is one of the major stumbling blocks holding back wider clinical adoption of artificial intelligence. But new legislation was introduced into the U.S. Congress last week that could ease AI’s reimbursement path. 

For AI developers, getting an algorithm approved is just the first step toward commercial acceptance. 

  • Perhaps even more important than FDA clearance is Medicare reimbursement, as healthcare providers are reluctant to use a product they won’t get paid for. 

Reimbursement drives clinical AI adoption, as evidenced by a 2023 analysis listing the top algorithms by CPT claims submitted (Heartflow Analysis topped the list). 

  • But CMS uses a patchwork system governing reimbursement, from temporary codes like New Technology Add-On Payment codes that expire after 2-3 years to G-codes for procedures that don’t have CPT codes, on up to the holy grail of medical reimbursement: Category I codes. 

The new legislation – S.1399, or the Health Tech Investment Act – would simplify the situation by setting up a dedicated Medicare coverage pathway for AI-enabled medical devices approved by the FDA (called “algorithm-based healthcare services”), as follows … 

  • All FDA-approved products would be assigned a Category III New Technology Ambulatory Payment Classification in the HOPPS program.
  • NTAPC codes would last for five years to enable collection of cost data before a permanent payment code is assigned. 
  • Payment classifications would be based on the cost of the service as estimated by the manufacturer. 

The bill at present has co-sponsors from both political parties, Sen. Mike Rounds (R-SD) and Sen. Martin Heinrich (D-NM). 

  • The legislation has also drawn support from industry heavyweights like GE HealthCare and Siemens Healthineers, as well as industry groups like AdvaMed and others.

The Takeaway

The new bill sounds like a great idea, but it’s easy to be skeptical about its prospects in today’s highly charged political environment – especially when even bipartisan compromises like the 2025 Medicare fix got scuttled. Still, S.1399’s introduction at least shows that the highest levels of the U.S. government are cognizant of the need to improve clinical AI reimbursement.

Radiology’s Rising Workload

If you think new imaging IT technologies will reduce radiologist workload in the future, you might want to think again. Researchers who analyzed hundreds of studies on new scientific advances predicted that nearly half would increase radiologists’ workload – with AI-focused studies especially likely to do so. 

Radiologists are desperately in need of help to manage rising imaging volumes during a period of global workforce shortages. 

But how true is that belief? In a new study in the European Journal of Radiology, radiologists Thomas Kwee, MD, and Robert Kwee, MD, from the Netherlands analyzed a random sample of 416 articles published in 2024 on imaging applications that could affect future radiologist workloads, finding …

  • 49% of the articles on applications that had the potential to directly impact patient care would increase radiologist workload in the tertiary care academic setting. 
  • Studies on AI-focused applications were 14x more likely to increase workload than studies on non-AI applications.
  • Similar numbers were found for non-academic general teaching hospitals.
  • The findings are largely similar to a 2019 study by Kwee et al. that used the same methodology.  

Why don’t new imaging applications show more potential to reduce radiologists’ workloads? 

  • The Kwees found that image post-processing and interpretation times have grown for both existing and new applications. 

In the specific case of AI, they cited an example in which a deep learning algorithm was introduced to analyze CT scans to segment and classify features of spontaneous intracerebral hemorrhage and predict hematoma expansion.

  • The model successfully predicted hematoma expansion and automatically segmented lesions, but CT images still had to be post-processed with a separate workflow. This required additional radiologist interpretation time and extended their workload.

The Takeaway

The new study throws cold water on the idea that AI will be able to solve radiology’s workload dilemma. It’s possible that AI will have an impact on radiology that’s similar to that of PACS in the 1990s in making radiologists more productive, but we’ll need new efficiency-oriented changes to achieve that goal.

VC AI Funding Plummets

If you thought venture capital funding of AI firms was lower last year, you weren’t wrong. A new report from market analysis firm Signify Research found that VC funding of radiology AI firms dropped by nearly half in 2024 compared to the year before. 

VC funding has become a closely watched barometer of the radiology AI segment’s overall health. 

  • As most AI developers haven’t generated significant cash flows from product revenues yet, VC money is the oxygen that keeps AI firms breathing. 

And there are signs that after peaking in 2021, that oxygen is in increasingly short supply. 

  • Signify’s report last year documented a 19% drop in VC AI funding in 2023, a development attributed to tighter access to capital due to high interest rates. 

Those trends continued into 2024, with the new Signify report finding …

  • Total VC funding was $335.5M, down 48% compared to $645.6M in 2023.
  • The number of funding rounds fell 35% (20 vs. 31), to the lowest level since 2015.
  • Average deal size fell 16% ($16.8M vs. $20.1M).
  • Cleerly raised the most in 2024 with $106M in funding, followed by Qure.ai with $65M (putting Qure into Signify’s elite “$100M Club”). 
  • Funding declines were even worse in the Asia-Pacific (-84%) and the Europe, Middle East, and Africa regions (-76%) compared to the peak in 2021. 
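The percentage declines above follow directly from the reported totals; a quick sanity check of the arithmetic, using only the figures relayed from Signify's report:

```python
# Back-of-envelope check of the Signify figures reported above.
total_2023, total_2024 = 645.6, 335.5    # VC funding, $M
rounds_2023, rounds_2024 = 31, 20        # number of funding rounds

funding_drop = (total_2024 - total_2023) / total_2023    # ≈ -48%
rounds_drop = (rounds_2024 - rounds_2023) / rounds_2023  # ≈ -35%
avg_deal_2024 = total_2024 / rounds_2024                 # ≈ $16.8M

print(f"Funding change: {funding_drop:.0%}")   # → Funding change: -48%
print(f"Rounds change: {rounds_drop:.0%}")     # → Rounds change: -35%
print(f"Avg 2024 deal: ${avg_deal_2024:.1f}M") # → Avg 2024 deal: $16.8M
```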

Signify analyst Umar Ahmed noted that 2025 got off to a strong start, with $100M in funding rounds announced in January.

  • This could either represent a rebound in investor confidence, or indicate that the AI funding cycle is getting longer as VC firms put developers under the microscope and demand better ROI for their investments. 

The Takeaway

So it’s agreed that 2024 was a wash for VC radiology AI funding – what about 2025? The year’s strong start appears to have petered out as we approach the spring quarter, and ongoing regulatory turbulence and economic uncertainty in the U.S. aren’t likely to help. Stay tuned. 

New Mammography AI Insights

Breast screening is becoming one of the most promising use cases for AI, but there’s still a lot we’re learning about it. A new study in Radiology: Artificial Intelligence revealed new insights into how well mammography AI performs in a screening environment. 

As we’ve reported in the past, mammography is one of radiology’s most challenging cancer screening exams, with radiologists sorting through large volumes of normal images before encountering a case that might be cancer.

In the new study, researchers applied Lunit’s Insight MMG algorithm to mammograms in a retrospective study of 136.7k women screened in British Columbia from 2019 to 2020. 

  • Canada uses single reading for mammography, unlike the double-reading protocols employed in the U.K. and Europe. 

AI’s performance was compared to single-reading radiologists using various metrics and follow-up periods, finding … 

  • At one-year follow-up, AI had slightly lower sensitivity (89% vs. 93%) and specificity (79% vs. 92%) compared to radiologists.
  • At two-year follow-up, there was no statistically significant difference in sensitivity between the two (83.5% vs. 84.3%, p=0.69). 
  • AI’s overall AUC at one year was 0.93, but this varied based on mammographic and demographic features, with AI performing better in cases with fatty versus dense breasts (0.96 vs. 0.84) and cases with architectural distortion (0.96 vs. 0.92) but worse in cases with calcifications (0.87 vs. 0.92).

The researchers then constructed hypothetical scenarios in which AI might be used to assist radiologists, finding …

  • If radiologists only read cases ruled abnormal by AI, it would reduce workload by 78%, but at a price of reduced sensitivity (86% vs. 93%) and 59 missed cancers across the cohort.
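The arithmetic behind this kind of rule-out scenario is simple to sketch. The numbers below are illustrative placeholders, not the study's actual cohort figures:

```python
# Minimal sketch of an AI rule-out triage calculation.
# All inputs are illustrative placeholders, not figures from the study.
def triage_tradeoff(n_exams, n_cancers, ai_sensitivity, ai_abnormal_rate):
    """Workload and missed cancers if radiologists read only AI-flagged cases."""
    workload_reduction = 1 - ai_abnormal_rate         # share of exams never read
    missed = round(n_cancers * (1 - ai_sensitivity))  # cancers AI fails to flag
    return workload_reduction, missed

# Hypothetical screening cohort: 100k exams, 500 cancers,
# an AI that flags 22% of exams and catches 90% of cancers.
wr, missed = triage_tradeoff(100_000, 500, 0.90, 0.22)
print(f"Workload reduction: {wr:.0%}, missed cancers: {missed}")
# → Workload reduction: 78%, missed cancers: 50
```

The design question for any such triage rule is whether the workload savings justify the cancers the AI alone would miss.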

It’s worth noting that Insight MMG is designed to analyze 2D digital mammography exams.

The Takeaway

While the new findings aren’t a slam dunk for mammography AI, they do provide valuable insight into its performance that can inform future research, especially into areas where AI could use improvement. 

Will FDA Staff Cuts Slow AI Adoption?

The Trump Administration’s campaign to cut the federal workforce arrived at the FDA last weekend – in particular its division regulating AI in healthcare. Multiple staff cuts were reported at the Center for Devices and Radiological Health, which had been in the midst of a major overhaul of AI regulation. 

A February 15 article in STAT News first reported the layoffs, which, as with other recent staff reductions, concentrated on FDA employees with probationary status and were part of a larger initiative that has also affected the CDC and NIH. 

The rapid growth of medical AI has had a major impact on the center, which as of its last report had given regulatory authorization to over 1k AI-enabled devices (76% of which are for radiology). 

  • To deal with the deluge, CDRH reportedly had been hiring many new staffers who were still on probationary status, making them targets for layoffs (permanent federal employees have civil service protections that make them harder to fire). 

FDA also has been retooling its regulatory approach to AI with new initiatives that reflect the fact that AI products continue learning (and changing) after they’ve been approved, and thus require more aggressive post-market surveillance than other medical devices.

So what impact – if any – will the layoffs have on the rapidly growing medical AI segment? 

  • The FDA may simply scale back its new AI initiatives and regulate the field under more traditional avenues that have served the medical device industry well for decades.

In another scenario, the FDA’s frenzied pace of AI approvals and initiatives could slow as the agency struggles to handle a growing number of product submissions with fewer staff. 

The Takeaway

The FDA layoffs couldn’t have come at a worse time for medical AI, which is on the cusp of wider clinical acceptance but still suffers from shaky confidence and poor understanding on the part of both providers and patients (see story below). The question is whether providers, organized radiology, or developers themselves will be able to step into the gap being left.

AI As Malpractice Safety Net

One of the emerging use cases for AI in radiology is as a safety net that could help hospitals avoid malpractice cases by catching errors made by radiologists before they can cause patient harm. The topic was reviewed in a Sunday presentation at RSNA 2024.

Clinical AI adoption has been held back by economic factors such as limited reimbursement and the lack of strong return on investment. 

  • Healthcare providers want to know that their AI investments will pay off, either through direct reimbursement from payors or improved operational efficiency.

At the same time, providers face rising malpractice risk, with a number of recent high-profile legal cases.

  • For example, a New York hospital was hit with a $120M verdict after a resident physician working the night shift missed a pulmonary embolism. 

Could AI limit risk by acting as a backstop to radiologists? 

  • At RSNA 2024, Benjamin Strong, MD, chief medical officer at vRad, described how they have deployed AI as a QA safety net. 

vRad mostly develops its own AI algorithms, with the first algorithm deployed in 2015. 

  • vRad is running AI algorithms as a backstop for 13 critical pathologies, from aortic dissection to superior mesenteric artery occlusion.

vRad’s QA workflow begins after the radiologist issues a final report (without using AI), and an algorithm then reviews the report automatically. 

  • If discrepancies are found, the report is sent to a second radiologist, who can kick the study back to the original radiologist if they believe an error has occurred. The entire process takes 20 minutes. 
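The discrepancy check at the heart of this workflow amounts to a set comparison between what the AI flagged and what the report contains. A schematic sketch (all names here are hypothetical, for illustration only – vRad's actual implementation is proprietary):

```python
# Schematic of a post-report AI QA safety net, per the workflow described
# above. All names are hypothetical; this is not vRad's actual code.
def qa_review(report_findings: set[str], ai_findings: set[str]) -> str:
    """Compare AI output against the radiologist's finalized report."""
    discrepancies = ai_findings - report_findings  # AI-flagged, not reported
    if not discrepancies:
        return "no action"
    # A second radiologist adjudicates the AI flag; if they agree an error
    # occurred, the study goes back to the original radiologist.
    return "route to second radiologist"

print(qa_review({"pneumonia"}, {"pneumonia", "aortic dissection"}))
# → route to second radiologist
```

Note that because the AI runs only after the report is finalized, it never influences the primary read, which keeps the safety net out of the diagnostic loop.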

In a review of the program over one year, vRad found …

  • Corrections were made for about 1.5k diagnoses out of 6.7M exams.
  • The top five AI models accounted for over $8M in medical malpractice savings. 
  • Three pathologies – spinal epidural abscess, aortic dissection, and ischemic bowel due to SMA occlusion – would have amounted to $18M in payouts over four years.
  • Adding intracranial hemorrhage and pulmonary embolism creates what Strong called the “Big Five” of pathologies that are either the most frequently missed or the most expensive when missed.

The Takeaway

The findings offer an intriguing new use case for AI adoption. Avoiding just one malpractice verdict or settlement would more than pay for the cost of AI installation, in most cases many times over. How’s that for return on investment?

How Are Doctors Using AI?

How are healthcare providers who have adopted AI really using it? A new Medscape/HIMSS survey found that most providers are using AI for administrative tasks, while medical image analysis is also one of the top AI use cases. 

AI has the potential to revolutionize healthcare, but many industry observers have been frustrated with the slow pace of clinical adoption. 

  • Implementation challenges, regulatory issues, and lack of reimbursement are among the reasons keeping more healthcare providers from embracing the technology.

But the Medscape/HIMSS survey shows some early successes for AI … as well as lingering questions. 

  • Researchers surveyed a total of 846 people in the U.S. who were either executive or clinical leaders, practicing physicians or nurses, or IT professionals, and whose practices were already using AI in some way.

The top four tasks for which AI is being used were administrative rather than clinical, with image analysis occupying the fifth spot … 

  1. Transcribing patient notes (36%). 
  2. Transcribing business meetings (32%).
  3. Creating routine patient communications (29%).
  4. Performing patient record-keeping (27%).
  5. Analyzing medical images (26%).

The survey also analyzed attitudes toward AI, finding …

  • 57% said AI helped them be more efficient and productive.
  • But lower marks were given for reducing staff hours (10%) and lowering costs (31%).
  • AI got the highest marks for helping with transcription of business meetings (77%) and patient notes (73%), reviewing medical literature (72%), and medical image analysis (70%).

The findings track well with developments at last week’s RSNA 2024, where AI algorithms dedicated to non-clinical tasks like radiology report generation, scheduling, and operations analysis showed growing prominence. 

  • Indeed, many AI developers have specifically targeted the non-clinical space, both because commercialization is easier (FDA authorization is not typically needed) and because doctors often say they need more help with administrative rather than clinical tasks.

The Takeaway

While it’s easy to be impatient with AI’s slow uptake, the Medscape/HIMSS survey shows that AI adoption is indeed occurring at medical practices. And while image analysis was radiology’s first AI use case, speeding up workflow and administrative tasks may end up being the technology’s most impactful application.

How Should AI Be Monitored?

Once an AI algorithm has been approved and moves into clinical use, how should its performance be monitored? This question was top of mind at last week’s meeting of the FDA’s new Digital Health Advisory Committee.

AI has the potential to radically reshape healthcare and help clinicians manage more patients with fewer staff and other resources. 

  • But AI also represents a regulatory challenge because it’s constantly learning, such that after a few years an AI algorithm might be operating much differently from the version first approved by the FDA – especially with generative AI. 

This conundrum was a point of discussion at last week’s DHAC meeting, which was called specifically to focus on regulation of generative AI, and could result in new rules covering all AI algorithms. (An executive summary that outlines the FDA’s thinking is available for download.)

Radiology was well-represented at DHAC, which is understandable given that the specialty has the lion’s share of authorized algorithms (73% of 950 devices at last count). 

  • A half-dozen radiology AI experts gave presentations over two days, including Parminder Bhatia of GE HealthCare; Nina Kottler, MD, of Radiology Partners; Pranav Rajpurkar, PhD, of Harvard; and Keith Dreyer, DO, PhD, and Bernardo Bizzo, MD, PhD, both of Mass General Brigham and the ACR’s Data Science Institute.  

Dreyer and Bizzo directly addressed the question of post-market AI surveillance, discussing ongoing efforts to track AI performance, including the ACR’s ARCH-AI accreditation program and its Assess-AI monitoring registry. 

The Takeaway

Last week’s DHAC meeting offers a fascinating glimpse at the issues the FDA is wrestling with as it contemplates stronger regulation of generative AI. Fortunately, radiology has blazed a trail in setting up structures like ARCH-AI and Assess-AI to monitor AI performance, and the FDA is likely to follow the specialty’s lead as it develops a regulatory framework.

Low-Dose CT Confounds CAD in Kids

When it comes to pediatric CT scans, clinicians should make every effort to reduce dose as much as possible. But a new study in AJR indicates that lower CT radiation dose can affect the performance of software tools like computer-aided detection. 

Initiatives like the Image Wisely and Image Gently projects have succeeded in raising awareness of radiation dose and have helped radiologists find ways to reduce it.

But every little bit counts in pediatric dose reduction, especially given that one CT exam can raise the risk of developing cancer by 0.35%. 

  • Imaging tools like AI and CAD could help, but there have been few studies examining the performance of pulmonary CAD software developed for adults in analyzing scans of children.

To address that gap, researchers including radiologists from Cincinnati Children’s Hospital Medical Center investigated the performance of two open-source CAD algorithms trained on adults for detecting lung nodules in 73 patients with a mean age of 14.7 years. 

  • The algorithms included FlyerScan, a CAD algorithm developed by the authors, and MONAI, an open-source project for deep learning in medical imaging. 

Scans were acquired at standard-dose (mean effective dose=1.77 mSv) and low-dose (mean effective dose=0.32 mSv) levels, with the results showing that both algorithms turned in lower performance at lower radiation dose for nodules 3-30 mm … 

  • FlyerScan saw its sensitivity decline (77% vs. 67%) and detected fewer 3mm lung nodules (33 vs. 24).
  • MONAI also saw lower sensitivity (68% vs. 62%) and detected fewer 3mm lung nodules (16 vs. 13).
  • Reduced sensitivity was more pronounced for nodules less than 5 mm.

The findings should be taken with a grain of salt, as the open-source algorithms were not originally trained on pediatric data.

  • But the results do underscore the challenge in developing image analysis software optimized for pediatric applications.

The Takeaway

With respect to low radiation dose and high AI accuracy in CT scans of kids, radiologists may not be able to have their cake and eat it too – yet. More work will be needed before AI solutions developed for adults can be used in children.
