Bayer Steps Back from Blackford

Pharmaceutical giant Bayer said it plans to deprioritize its investment in AI platform company Blackford Analysis as part of a general move away from the platform business. Bayer is also winding down its investment in Calantic Digital Solutions, the digital platform company it formed in 2022. 

The move is a stunning turnaround for Blackford, which was founded in 2010 and was the first and perhaps most prominent of the digital AI platform companies. 

  • Bayer acquired Blackford in 2023, and operated it in parallel with Calantic, which also offered AI solutions in the platform format. 

Platform AI companies have a simple value proposition: rather than buy AI algorithms from multiple individual developers, hospitals and imaging facilities contract with a single platform company and pick and choose the solutions they need.

  • It’s a great idea, but platform providers face the same challenges as algorithm developers due to slower-than-expected AI clinical adoption. 

Bayer’s move was confirmed by a company representative, who noted that personnel will be maintained to support the Blackford AI platform and fulfill existing contractual commitments. 

  • “Bayer has made the decision to deprioritize its digital platform business, which includes Blackford, and will discontinue offerings and services. Resources will be reinvested into growth areas that support healthcare institutions around the world, in alignment with customer needs,” the representative said. 

And in a letter to customers obtained by The Imaging Wire, Blackford confirmed Bayer’s decision, stating that its core team, led by COO James Holroyd, will remain in place during the transition. 

  • The company also said it would “discuss and facilitate opportunities to move existing Blackford contracts into direct deals with AI vendors, or alternate platform providers.”

Bayer’s withdrawal from the digital platform space includes the Calantic business, which Bayer formed three years ago to offer internally developed AI tools.

  • At the time, industry experts postulated that contrast agent companies had an inside track for radiology AI thanks to their contracts to supply consumables to customers – a theory that in retrospect hasn’t borne fruit.

Speculation about Blackford’s fate burst into the public eye late last week with a detailed LinkedIn post by healthcare recruiter Jay Gurney, who explained that while Blackford has been successful – and is sitting on a “monster pipeline” of hospital deals – it’s simply not a great fit for a pharmaceutical company. 

  • Despite Bayer’s withdrawal, Blackford could make a good acquisition candidate for a company without a strong AI portfolio that wants to quickly boost its position. 

The Takeaway

Bayer’s announcement that it’s winding down its Blackford and Calantic investments is sure to send shockwaves through the radiology AI industry, which is already struggling with slow clinical adoption and declining venture capital investment. The question is whether a white knight will ride to Blackford’s rescue.

Lunit Acquires Prognosia Breast Cancer Risk AI

AI developer Lunit is ramping up its position in breast cancer risk prediction by acquiring Prognosia, the developer of a risk prediction algorithm spun out from Washington University School of Medicine in St. Louis. The move will complement the existing 2D and 3D mammography analysis AI models from Lunit and its Volpara subsidiary. 

Risk prediction has been touted as a better way to determine which women will develop breast cancer in coming years, since high-risk women can be managed more aggressively through shorter screening intervals or the use of additional imaging modalities.

  • Risk prediction traditionally has relied on models like Tyrer-Cuzick, which is based on clinical factors like patient age, weight, breast density, and family history.

But in recent years developers have leveraged advances in AI to build algorithms that could be more accurate than these traditional models.

  • One of these is Prognosia, founded in 2024 based on work conducted by Graham Colditz, MD, DrPH, and Shu (Joy) Jiang, PhD, at Washington University.

Their Prognosia Breast algorithm analyzes subtle differences and changes in 2D and 3D mammograms over time, such as texture, calcification, and breast asymmetry, to generate a score that predicts the risk of developing a new tumor.
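
To give a flavor of how longitudinal image features might be folded into a single risk score, here is a deliberately simplified Python sketch. The feature names, weights, and logistic form are hypothetical assumptions for illustration only; this is not Prognosia’s model.

```python
# Hypothetical illustration of turning changes in image-derived features into a
# risk score. NOT Prognosia's model: feature names, weights, and the logistic
# form are invented purely for illustration.
import math

def illustrative_risk_score(prior_features: dict, current_features: dict) -> float:
    """Map the change in a few image features between exams to a 0-1 score."""
    weights = {"texture": 1.5, "calcification": 1.0, "asymmetry": 2.0}  # made-up weights
    bias = -3.0
    z = bias + sum(
        w * (current_features[name] - prior_features[name])
        for name, w in weights.items()
    )
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash to a probability-like value

prior = {"texture": 0.40, "calcification": 0.10, "asymmetry": 0.20}
current = {"texture": 0.55, "calcification": 0.25, "asymmetry": 0.35}
print(f"Illustrative risk score: {illustrative_risk_score(prior, current):.2f}")
```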

Prognosia built on that work by filing a regulatory application with the FDA, and the submission received Breakthrough Device Designation.

  • In conversations with The Imaging Wire, Colditz and Jiang said they believe AI-based risk estimates like those of Prognosia Breast will eventually replace the one-size-fits-all model of breast screening, with low-risk women screened less often and high-risk women getting more attention. 

Colditz and Jiang are working with the FDA on marketing authorization, and once authorized Prognosia’s algorithm will enter a segment that’s drawing increased attention from AI developers.

  • The two will continue to work with Lunit as it moves Prognosia Breast into commercialization, integrating the product with Lunit’s own offerings – like the RiskPathways application in Lunit Breast Suite – and with technologies gained through its 2024 acquisition of Volpara.

The Takeaway

Lunit’s acquisition of Prognosia portends exciting times ahead for breast cancer risk prediction. Armed with tools like Prognosia Breast, clinicians will soon be able to offer mammography screening protocols that are far more tailored to women’s risk profiles than what’s been available in the past. 

Ensemble Mammo AI Combines Competing Algorithms

If one AI algorithm works great for breast cancer screening, would two be even better? That’s the question addressed by a new study that combined two commercially available AI algorithms and applied them in different configurations to help radiologists interpret mammograms.

Mammography AI is emerging as one of the primary use cases for medical AI, understandable given that breast imaging specialists have to sort through thousands of normal cases to find one cancer. 

Most studies to date have applied a single AI algorithm to mammograms, but multiple algorithms are commercially available, so why not see how they work together? 

  • This kind of ensemble approach has already been tried with AI for prostate MRI scans – for example in the PI-CAI challenge – but South Korean researchers writing in European Radiology believed it would be a novel approach for mammography.

So they combined two commercially available algorithms – Lunit’s Insight MMG and ScreenPoint Medical’s Transpara – and used them to analyze 3k screening and diagnostic mammograms.

  • Not only did the authors combine competing algorithms, they also ran the ensemble in five different configurations, either emphasizing screening parameters such as sensitivity and specificity or having the algorithms assess cases in different sequences.

The authors assessed ensemble AI’s accuracy and ability to reduce workload by triaging cases that didn’t need radiologist review, finding that the ensemble…

  • Outperformed single-algorithm AI’s sensitivity in Sensitive Mode (84% vs. 81%-82%) with an 18% radiologist workload reduction.
  • Outperformed single-algorithm AI’s specificity in Specific Mode (88% vs. 84%-85%) with a 42% workload reduction.
  • Had 82% sensitivity in Conservative Mode but only reduced workload by 9.8%.
  • Showed little difference in sensitivity based on which algorithm read mammograms first (80.3% vs. 80.8%), with both sequencing approaches reducing workload by 50%.

The authors suggested that if applied in routine clinical use, ensemble AI could be tailored based on each breast imaging practice’s preferences and where they felt they needed the most help.
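
For readers curious how such configurable triage might be wired up, here is a minimal Python sketch that combines two per-exam suspicion scores under a “sensitive” and a “specific” operating mode. The thresholds, score scale, and combination logic are hypothetical assumptions, not the study’s published implementation or either vendor’s output format.

```python
# Illustrative ensemble triage rule combining two mammography AI suspicion
# scores (one per exam from each algorithm). Mode names mirror the study's
# concept; thresholds and logic here are hypothetical, not the published setup.

from dataclasses import dataclass

@dataclass
class EnsembleConfig:
    threshold_a: float  # operating point for algorithm A's score (0-1)
    threshold_b: float  # operating point for algorithm B's score (0-1)
    require_both: bool  # True = flag only if both algorithms agree (favors specificity)

MODES = {
    # Lower thresholds plus "either algorithm" logic favor sensitivity.
    "sensitive": EnsembleConfig(threshold_a=0.2, threshold_b=0.2, require_both=False),
    # Higher thresholds plus "both algorithms" logic favor specificity.
    "specific": EnsembleConfig(threshold_a=0.6, threshold_b=0.6, require_both=True),
}

def needs_radiologist_review(score_a: float, score_b: float, mode: str) -> bool:
    """Return True if the exam goes to a radiologist, False if it can be triaged out."""
    cfg = MODES[mode]
    flag_a = score_a >= cfg.threshold_a
    flag_b = score_b >= cfg.threshold_b
    return (flag_a and flag_b) if cfg.require_both else (flag_a or flag_b)

# Example: an exam scored 0.35 by algorithm A and 0.15 by algorithm B.
print(needs_radiologist_review(0.35, 0.15, "sensitive"))  # True  -> radiologist reads it
print(needs_radiologist_review(0.35, 0.15, "specific"))   # False -> triaged as likely normal
```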

The Takeaway

The new results offer an intriguing application of the ensemble AI strategy to mammography screening. Given the plethora of breast AI algorithms available and the rise of platform AI companies that put dozens of solutions at clinicians’ fingertips, it’s not hard to see this approach being put into clinical practice soon.

Unpacking Heartflow IPO’s Lessons for AI Firms

Cardiac AI specialist Heartflow went public last week, and the IPO was a watershed moment for the imaging AI segment. The question is whether Heartflow is blazing a path to be followed by other AI developers or if the company is a shooting star that’s more likely to be admired from afar than emulated.

First the details: Heartflow went public August 8, raising $317M by issuing 16.7M shares at $19 each – and finishing up 50% for the day. 

  • The IPO beat analyst expectations, which originally estimated gross proceeds of $215M, and put the company’s market capitalization at $2.5B – well within the mid-cap stock category. 

So what’s so special about this IPO? Heartflow’s flagship product is FFRCT Analysis, which uses AI-based software to calculate fractional flow reserve – a measure of heart health – from coronary CT angiography scans. 

  • This eliminates the need for an invasive pressure-wire catheter to be threaded into the heart.
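
For context, FFR itself is the ratio of mean pressure distal to a coronary stenosis to mean aortic pressure under hyperemia, and values at or below roughly 0.80 are generally considered flow-limiting. The minimal sketch below illustrates only that ratio; Heartflow derives the pressures noninvasively through computational fluid dynamics on coronary CTA, which is not reproduced here, and the numbers are invented.

```python
# Conceptual sketch of the FFR ratio only. The pressure values are invented;
# Heartflow's CFD-based derivation of these pressures from CT is not shown here.

def fractional_flow_reserve(distal_pressure_mmHg: float, aortic_pressure_mmHg: float) -> float:
    """FFR approximated as distal coronary pressure over aortic pressure under hyperemia."""
    return distal_pressure_mmHg / aortic_pressure_mmHg

ffr = fractional_flow_reserve(distal_pressure_mmHg=68.0, aortic_pressure_mmHg=92.0)
print(f"FFR = {ffr:.2f} -> {'likely flow-limiting' if ffr <= 0.80 else 'not flow-limiting'}")
```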

Heartflow got an early start in the FFR-CT segment by nabbing FDA clearance for Heartflow FFRCT Analysis in 2014, and since then has been the single most successful AI company in winning reimbursement from both CMS and private payors.

  • In fact, a 2023 analysis of AI reimbursement found that FFRCT Analysis was the top AI product by number of submitted CPT claims, at 67.3k claims – over 4X more than the next product on the list.

That’s created a revenue stream for Heartflow that clearly bucks the myth that clinicians aren’t getting paid for AI.

  • And in an IPO filing with the SEC, Heartflow revealed how reimbursement is driving revenue growth, which was up 44% in 2024 over 2023 ($125.8M vs. $87.2M, respectively). 

But it’s not all sunshine and rainbows at the Mountain View, California company, which posted significant net losses for both 2024 and 2023 ($96.4M and $95.7M).

  • As a public company, Heartflow may have a shorter leash in getting to profitability than it would have had it remained privately held.

But the bigger picture is what Heartflow’s IPO means for the imaging AI segment as a whole. 

  • It’s easily the biggest IPO by a pure-play imaging IT vendor in years, and dispels the conventional wisdom that investors are shying away from the sector.

The Takeaway

Heartflow’s IPO shows that in spite of clinical AI’s shortcomings (slow adoption, sluggish reimbursement, etc.), it’s still generating significant investor interest. The company’s focus on achieving both clinical and financial milestones (i.e. reimbursement) should be an example for other AI developers.

Prostate AI Improves Biparametric MRI

Researchers continue to home in on the best way to use MRI for patients suspected of having prostate cancer, and AI is helping the effort. A new study in AJR shows that AI can improve the diagnostic accuracy and consistency of prostate MRI – while making it easier to perform.

Multiparametric MRI is the gold standard for prostate cancer imaging, but requires the use of three different MRI sequences as well as contrast administration, making it more complex and time-intensive to perform. 

  • On the other hand, biparametric MRI uses just two sequences – T2-weighted and diffusion-weighted imaging – and omits the contrast entirely, leading to shorter scan times and lower cost.

But what are you losing with bpMRI – and can AI help you get it back? Researchers addressed this question in the new study in which six radiologists interpreted bpMRI scans of 180 patients from multiple centers. 

  • Radiologists used a deep learning algorithm developed at the NIH to interpret bpMRI scans acquired on 3T scanners. The open-source algorithm generates binary prostate cancer prediction maps that are overlaid on T2-weighted images.
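
To make the overlay concept concrete, here is a minimal Python sketch that paints a binary prediction map onto a grayscale T2-weighted slice. The arrays are random placeholders and the NIH algorithm is not reproduced; only the visualization step is illustrated.

```python
# Minimal sketch of overlaying a binary cancer-prediction map on a T2-weighted
# slice. Purely illustrative: arrays are placeholders, and the NIH model that
# would generate the real prediction map is not reproduced here.

import numpy as np
import matplotlib.pyplot as plt

# Stand-ins for real data: a 2D T2-weighted slice and a same-sized binary map
# where True marks voxels the model predicts as clinically significant cancer.
t2_slice = np.random.rand(256, 256)             # placeholder grayscale image
prediction_map = np.zeros((256, 256), dtype=bool)
prediction_map[120:140, 100:125] = True         # placeholder "lesion" region

# Mask out non-lesion voxels so only predicted regions are colored.
overlay = np.ma.masked_where(~prediction_map, prediction_map.astype(float))

plt.imshow(t2_slice, cmap="gray")
plt.imshow(overlay, cmap="autumn", alpha=0.4)   # semi-transparent highlight
plt.axis("off")
plt.title("Binary prediction map overlaid on T2-weighted slice (illustrative)")
plt.show()
```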

Researchers found that radiologists using the bpMRI AI algorithm to detect clinically significant prostate cancer had…

  • An increase in lesion-level positive predictive value (77% vs. 67%).
  • But lower lesion-level sensitivity (44% vs. 48%). 
  • And no statistically significant difference in patient-level AUC (0.82 vs. 0.83, p = 0.61).
  • While inter-reader agreement scores improved for lesion-level and patient-level PI-RADS scores and lesion size measurements. 

What to make of the numbers? The authors pointed out that the study design – in which AI was used as a first reader – may have reduced AI’s performance.

  • In real clinical practice, AI would most likely be used as a sort of clinical spell checker, with AI results overlaid on images that radiologists had already seen. 

The researchers said the results on improved positive predictive value and inter-reader agreement show that AI can improve the diagnostic accuracy and consistency of bpMRI for prostate cancer. 

The Takeaway

The new findings echo other research like the PI-CAI study highlighting the growing role of AI in prostate cancer detection. If validated in further studies, they suggest that AI-assisted bpMRI could be ready to take on mpMRI for a broader role.

RP Builds AI Mosaic as Company’s IT Foundation

Radiology Partners announced a new initiative to guide the rollout of AI across its nationwide network of radiology practices. The company’s new MosaicOS will be the IT foundation that connects RP practices and supports clinical uses ranging from AI-assisted structured reporting and report drafting to image management.

Radiology Partners has grown since its founding in 2012 to become the largest privately held provider of imaging services in the U.S. and a major force behind the consolidation of private-practice radiology groups.

  • RP has always maintained a heavy technology investment, and has been looking closely at the rise of AI in radiology.

That’s because the growth in imaging volume is so massive that clinicians will no longer be able to care for patients adequately without AI’s assistance, at least according to RP’s Associate Chief Medical Officer for Clinical AI, Nina Kottler, MD.

RP laid the groundwork for MosaicOS in 2020 by first migrating its technology stack to a cloud-native infrastructure. 

  • This frees RP from reliance on on-premises legacy software and enables the company to push out updates that can be adopted quickly across its network.

RP’s Mosaic rollout includes the following components as the company…

  • Forms a new division, Mosaic Clinical Technologies, to oversee its AI activities.
  • Debuts MosaicOS, a cloud-native operating system that combines AI support with workflow and other IT tools.
  • Launches Mosaic Reporting, an automated structured reporting solution that combines ambient voice AI with large language model technology.
  • Develops Mosaic Drafting, a multimodal AI foundation model that pre-drafts X-ray reports that radiologists can review, edit, and sign. 

Mosaic Reporting is already in use at some RP sites, and the company is pursuing FDA clearance for broader use of Mosaic Drafting. More Mosaic applications are on the way.

  • Mosaic tools will be disseminated to RP centers using the cloud-native infrastructure, and MosaicOS will include image management functions that providers can choose to use in place of or alongside existing tools like viewers and archives. 

Kottler told The Imaging Wire that RP has de-emphasized individual pixel-based AI models in favor of foundation models that have broader application.

  • What’s more, RP CEO Rich Whitney said the company has chosen to develop AI technology internally rather than rely on outside vendors, as this gives it greater control over its own AI adoption.

The Takeaway

The launch of MosaicOS marks an exciting milestone not only for Radiology Partners but also for radiology in general that could address nagging concerns about clinical AI adoption on a broad scale. RP has not only the network but also the technology resources to make the rollout a success – the question is whether outside AI developers will share in the rewards.

Radiology AI Approvals Near 1k in New FDA Update

The FDA last week released the long-awaited update to its list of AI-enabled medical devices that have received marketing authorization. The closely watched list shows the number of AI-enabled radiology authorizations approaching the 1k mark.

The FDA has been tracking authorizations of AI-enabled devices going back to 1995, and the list gives industry watchers a feel for not only how quickly the agency is churning out reviews but also which medical specialties are generating the most approvals.

  • But the last time the FDA released an updated list was August 2024, and recent turmoil at the agency had some observers wondering if it would continue the tradition – as well as whether it could stay on pace for new approvals.

Those fears should be assuaged with the new release. The numbers indicate that through May 2025 the FDA has…

  • Granted authorization to 1.2k AI-enabled medical devices since it started tracking.
  • Approved 956 AI-enabled radiology products, or 77% of all AI-enabled device authorizations.
  • Seen radiology’s share of authorizations from January to May 2025 tick up to 78% (115/148), compared to 73% in the 2024 update and 80% in all of 2023.
  • Confirmed GE HealthCare as the company with the most radiology AI authorizations, at 96 (including recent acquisitions like Caption Health and MIM Software), with Siemens Healthineers in second place at 80 (including Varian). 
  • Listed other notable vendors including Philips (42, including DiA Analysis), Canon (35), United Imaging (32), and Aidoc (30). 

In a significant regulatory development, the FDA said it was developing a plan to identify and tag medical devices that use foundation models, including large language models and multimodal architectures. 

  • The agency said the program would help healthcare providers and patients know when LLM-based functionality was included in a medical device (the FDA has yet to approve a medical device with LLM technology). 

In another interesting change, the FDA dropped “machine learning” from the title of its list, apparently with the idea that “AI” was sufficient as an umbrella term. 

The Takeaway

The FDA’s release of its AI approval list is a welcome return to past practices that should reassure agency watchers that recent turmoil isn’t affecting its basic operations. The LLM guidance suggests the agency may be changing its approach to the technology in favor of disclosure and transparency instead of more stringent regulation that could delay some LLM solutions from reaching the market.

AI-Driven Diagnostics Detects Multiple Chest Diseases from Single CT Scan

A new generation of AI solutions is enabling clinicians to detect multiple chest pathologies from a single CT scan. Lung cancer, cardiovascular disease, and chronic obstructive pulmonary disease (COPD) can all be detected in just one imaging session, ushering in a new era of more efficient imaging that benefits both providers and patients. 

Advances in CT lung cancer screening have been generating headlines as new research highlights the improved clinical outcomes possible when lung cancer is detected early. 

  • But lung cancer is just one of a “big three” of thoracic comorbidities – the others being cardiovascular disease and COPD – that can result from long-term exposure to toxic substances like tobacco smoke. 

These comorbidities will be encountered more often as health systems ramp up lung cancer screening efforts, creating challenges for radiologists in managing the many incidental findings discovered with chest CT scans.

  • And it’s common knowledge that radiologists already have their hands full in an era of personnel shortages and rising imaging volumes. 

Fortunately, new AI technologies offer a solution. One of these is Coreline Soft’s AVIEW LCS Plus, an integrated three-in-one solution that enables simultaneous detection of lung cancer, cardiovascular disease, and COPD from a single chest CT scan. 

  • AVIEW LCS Plus is the only solution adopted for national lung cancer screening projects across key regions, including Korea (K-LUCAS), Germany (HANSE), Italy (RISP), and the pan-European 4ITLR consortium. 

Coreline’s solution is widely recognized as a pioneering AI platform for an era where unexpected findings can save lives, gaining increasing attention in academic journals and health policy reports alike.

  • In the U.S., AVIEW LCS Plus has been adopted by Temple Health, and the Pennsylvania system’s use of the solution in their Temple Healthy Chest initiative has been recognized as an innovative healthcare model within the Philadelphia region. 

Temple Health clinicians are finding that AI contributes to early detection of incidental findings, improved survival rates, and more proactive care planning.

  • AVIEW LCS Plus is streamlining lung cancer screening, helping identify chest conditions at earlier stages, when treatment is most effective. It flags not only lung nodules but also comorbidities that are often missed in conventional CT workflows. 

Coreline Soft will be presenting AVIEW LCS Plus in collaboration with Temple Health at the upcoming American Thoracic Society (ATS 2025) international conference in San Francisco from May 16-21. 

  • Attendees will be able to learn how AI in medical imaging can establish new standards, not just in diagnostics, but across policy, patient care, and hospital strategy. 

Getting Paid for AI – Will It Get Easier?

Reimbursement is one of the major stumbling blocks holding back wider clinical adoption of artificial intelligence. But new legislation was introduced into the U.S. Congress last week that could ease AI’s reimbursement path. 

For AI developers, getting an algorithm approved is just the first step toward commercial acceptance. 

  • Perhaps even more important than FDA clearance is Medicare reimbursement, as healthcare providers are reluctant to use a product they won’t get paid for. 

Reimbursement drives clinical AI adoption, as evidenced by a 2023 analysis listing the top algorithms by CPT claims submitted (Heartflow Analysis topped the list). 

  • But CMS uses a patchwork system governing reimbursement, from temporary codes like New Technology Add-On Payment codes that expire after 2-3 years to G-codes for procedures that don’t have CPT codes, on up to the holy grail of medical reimbursement: Category I codes. 

The new legislation – S.1399, the Health Tech Investment Act – would simplify the situation by setting up a dedicated Medicare coverage pathway for AI-enabled medical devices approved by the FDA (called “algorithm-based healthcare services”), as follows … 

  • All FDA-approved products would be assigned a Category III New Technology Ambulatory Payment Classification in the HOPPS program.
  • NTAPC codes would last for five years to enable collection of cost data before a permanent payment code is assigned. 
  • Payment classifications would be based on the cost of the service as estimated by the manufacturer. 

The bill at present has co-sponsors from both political parties, Sen. Mike Rounds (R-SD) and Sen. Martin Heinrich (D-NM). 

  • The legislation has also drawn support from industry heavyweights like GE HealthCare and Siemens Healthineers, as well as industry groups like AdvaMed and others.

The Takeaway

The new bill sounds like a great idea, but it’s easy to be skeptical about its prospects in today’s highly charged political environment – especially when even bipartisan compromises like the 2025 Medicare fix got scuttled. Still, S.1399’s introduction at least shows that the highest levels of the U.S. government are cognizant of the need to improve clinical AI reimbursement.

Radiology’s Rising Workload

If you think new imaging IT technologies will reduce radiologist workload in the future, you might want to think again. Researchers who analyzed hundreds of studies on new scientific advances predicted that nearly half of them would increase radiologists’ workload – with AI-focused applications especially likely to do so. 

Radiologists are desperately in need of help to manage rising imaging volumes during a period of global workforce shortages. 

But will new imaging technology actually deliver that help? In the new study in European Journal of Radiology, radiologists Thomas Kwee, MD, and Robert Kwee, MD, from the Netherlands analyzed a random sample of 416 articles published in 2024 on imaging applications that could affect future radiologist workloads, finding …

  • 49% of the articles on applications that had the potential to directly impact patient care would increase radiologist workload in the tertiary care academic setting. 
  • Studies on AI-focused applications were 14X more likely to increase workload compared to studies that didn’t focus on AI.
  • Similar numbers were found for non-academic general teaching hospitals.
  • The findings are largely similar to a 2019 study by Kwee et al that used the same methodology.  

Why don’t new imaging applications show more potential to reduce radiologists’ workloads? 

  • The Kwees found that image post-processing and interpretation times have grown for both existing and new applications. 

In the specific case of AI, they cited an example in which a deep learning algorithm was introduced to analyze CT scans to segment and classify features of spontaneous intracerebral hemorrhage and predict hematoma expansion.

  • The model successfully predicted hematoma expansion and automatically segmented lesions, but CT images still had to be post-processed in a separate workflow, requiring additional radiologist interpretation time and adding to their workload.

The Takeaway

The new study throws cold water on the idea that AI will be able to solve radiology’s workload dilemma. It’s possible that AI will have an impact on radiology that’s similar to that of PACS in the 1990s in making radiologists more productive, but we’ll need new efficiency-oriented changes to achieve that goal.
