Acute Chest Pain CXR AI

Patients who arrive at the ED with acute chest pain (ACP) syndrome end up receiving a series of often-negative tests, but a new MGB-led study suggests that CXR AI might make ACP triage more accurate and efficient.

The researchers trained three ACP triage models using data from 23k MGH patients to predict acute coronary syndrome, pulmonary embolism, aortic dissection, and all-cause mortality within 30 days. 

  • Model 1: Patient age and sex
  • Model 2: Patient age, sex, and troponin or D-dimer positivity
  • Model 3: CXR AI predictions plus Model 2

In internal testing with 5.7k MGH patients, Model 3 predicted which patients would experience any of the ACP outcomes far more accurately than Models 2 and 1 (AUCs: 0.85 vs. 0.76 vs. 0.62), while maintaining performance across patient demographic groups.

  • At a 99% sensitivity threshold, Model 3 would have allowed 14% of the patients to skip additional cardiovascular or pulmonary testing (vs. Model 2’s 2%).

In external validation with 22.8k Brigham and Women’s patients, poor AI generalizability caused Model 3’s performance to drop dramatically, while Models 2 and 1 maintained their performance (AUCs: 0.77 vs. 0.76 vs. 0.64). However, fine-tuning with BWH’s own images significantly improved the performance of the standalone CXR AI model (AUC: 0.67 to 0.74) and Model 3 (AUC: 0.77 to 0.81).

  • At a 99% sensitivity threshold, the fine-tuned Model 3 would have allowed 8% of BWH patients to skip additional cardiovascular or pulmonary testing (vs. Model 2’s 2%).
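
Mechanically, the 99% sensitivity operating point is simple: find the highest risk-score threshold that still catches at least 99% of patients who go on to have an ACP outcome, then count how many patients score below it. A minimal sketch with synthetic scores (the cohort, score distributions, and `frac_deferrable` helper are all illustrative, not from the study):

```python
import numpy as np

def frac_deferrable(scores, labels, target_sens=0.99):
    """Pick the highest threshold that still keeps sensitivity at or
    above target_sens, then return the fraction of all patients who
    score below it (i.e., who could skip additional testing)."""
    pos = np.sort(scores[labels == 1])
    k = int(np.floor(len(pos) * (1 - target_sens)))  # positives we may miss
    thresh = pos[k]
    return float(np.mean(scores < thresh)), float(thresh)

# Synthetic cohort (illustrative, not study data): ~10% have an ACP
# outcome; a stronger model separates positives from negatives better.
rng = np.random.default_rng(0)
labels = (rng.random(5700) < 0.10).astype(int)
weak_scores = rng.normal(labels * 0.5, 1.0)    # weaker separation
strong_scores = rng.normal(labels * 2.0, 1.0)  # stronger separation
frac_weak, _ = frac_deferrable(weak_scores, labels)
frac_strong, t = frac_deferrable(strong_scores, labels)
```

The better a model separates positives from negatives, the more patients clear the same 99% sensitivity floor, which is the gap between Model 3’s 14% and Model 2’s 2%.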

The Takeaway

Acute chest pain is among the most common reasons for ED visits, but it’s also a major driver of wasted ED time and resources. Considering that most ACP patients undergo CXR exams early in the triage process, this proof-of-concept study suggests that adding CXR AI could improve ACP diagnosis and significantly reduce downstream testing.

CXR AI’s Screening Generalizability Gap

A new European Radiology study detailed a commercial CXR AI tool’s challenges when used for screening patients with low disease prevalence, bringing more attention to the mismatch between how some AI tools are trained and how they’re applied in the real world.

The researchers used an unnamed commercial AI tool to detect abnormalities in 3k screening CXRs sourced from two healthcare centers (2.2% w/ clinically significant lesions), and had four radiology residents read the same CXRs with and without AI assistance, finding that the AI:

  • Produced a far lower AUROC than in its other studies (0.648 vs. 0.77–0.99)
  • Achieved 94.2% specificity, but just 35.3% sensitivity
  • Detected 12 of 41 pneumonia cases, 3 of 5 tuberculosis cases, and 9 of 22 tumors
  • Only “modestly” improved the residents’ AUROCs (0.571–0.688 vs. 0.534–0.676)
  • Added 2.96 to 10.27 seconds to the residents’ average CXR reading times
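
The reported 35.3% sensitivity is consistent with simply pooling those per-finding detection counts (true positives over all clinically significant lesions), which can be checked in a couple of lines:

```python
# Pooled sensitivity from the per-finding detection counts above.
detected = {"pneumonia": 12, "tuberculosis": 3, "tumor": 9}
total    = {"pneumonia": 41, "tuberculosis": 5, "tumor": 22}

tp = sum(detected.values())         # 24 lesions caught by the AI
positives = sum(total.values())     # 68 significant lesions overall
sensitivity = tp / positives
print(f"pooled sensitivity: {sensitivity:.1%}")  # -> pooled sensitivity: 35.3%
```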

The researchers attributed the AI tool’s “poorer than expected” performance to differences between the data used in its initial training and validation (high disease prevalence) and the study’s clinical setting (high-volume, low-prevalence, screening).

  • More notably, the authors pointed to these results as evidence that many commercial AI products “may not directly translate to real-world practice,” urging providers facing this kind of training mismatch to retrain their AI or change their thresholds, and calling for more rigorous AI testing and trials.

These results also inspired lively online discussions. Some commenters cited the study as proof of the problems caused by training AI with augmented datasets, while others contended that the AI tool’s AUROC still rivaled the residents and its “decent” specificity is promising for screening use.

The Takeaway

We cover plenty of studies about AI generalizability, but most have explored bias due to patient geography and demographics, rather than disease prevalence mismatches. Even if AI vendors and researchers are already aware of this issue, AI users and study authors might not be, placing more emphasis on how vendors position their AI products for different use cases (or how they train them).

Guerbet’s Big AI Investment

Guerbet took a big step towards advancing its AI strategy, acquiring a 39% stake in French imaging software company Intrasense, and revealing ambitious future plans for their combined technologies.

Through Intrasense, Guerbet gains access to a visualization and AI platform and a team of AI integration experts to help bring its algorithms into clinical use. The tie-up could also create future platform and algorithm development opportunities, and support the expansion of their technologies across Guerbet’s global installed base.

The €8.8M investment (€0.44/share, a 34% premium) could turn into a €22.5M acquisition, as Guerbet plans to file a voluntary tender offer for all remaining shares.

Even though Guerbet is a €700M company and Intrasense is relatively small (~€3.8M 2022 revenue, 67 employees on LinkedIn), this seems like a significant move given Guerbet’s increasing emphasis on AI:

What Guerbet was lacking before now (especially since ending its Merative/IBM alliance) was a future AI platform – and Intrasense should help fill that void. 

If Guerbet acquires Intrasense, it would continue the recent AI consolidation wave, while adding contrast manufacturers to the growing list of previously-unexpected AI startup acquirers (joining imaging center networks, precision medicine analytics companies, and EHR analytics firms).

However, contrast manufacturers could play a much larger role in imaging AI going forward, considering the high priority that Bayer is placing on its Calantic AI platform.

The Takeaway

Guerbet has been promoting its AI ambitions for several years, and this week’s Intrasense investment suggests that the French contrast giant is ready to transition from developing algorithms to broadly deploying them. That would take a lot more work, but Guerbet’s scale and imaging expertise make it worth keeping an eye on if you’re in the AI space.

Medical Imaging in 2022

For our final issue of 2022 we’re reflecting on some of the year’s biggest radiology storylines, including some trends that might have a major impact in 2023 and beyond.

“Post-COVID” – Radiology teams thankfully scanned and assessed far fewer COVID patients in 2022, but the pandemic was still partially responsible for most of the trends included in this recap.

Imaging Labor Crunch – Many organizations still didn’t have enough radiologists and technologists to keep up with their imaging volumes this year, driving up labor costs and making efficiency even more important.

Hospital Margin Crunch – There’s a very good chance that the hospitals you work for or sell to had a tough financial year in 2022, placing greater importance on initiatives/technologies that earn or save them money (and address their labor challenges).

AI Evolution – If a radiology outsider read a random Imaging Wire issue they might think that radiologists already use AI every day. We know that isn’t true, but imaging AI’s 2022 progress suggests that we’re slowly heading in that direction.

New Mega Practice Paradigm – After years of massive national expansions, recent unfavorable shifts in surprise billing reimbursements, radiologist staffing (costs & shortages), and the lending environment seem to have caused large PE-backed radiology groups to pivot their 2022 strategies from practice growth to practice optimization.

The Patient Engagement Push – Radiology patient engagement gained momentum in 2022, as imaging teams and vendors worked to make imaging more accessible and understandable, more patient-centric imaging startups emerged, and radiology departments continued to get better at follow-up management.

The AI Shakeup – Everyone who has been predicting AI consolidation took a victory lap in 2022, which brought at least two strategic pivots (MaxQ AI & Kheiron) and the acquisitions of Aidence and Quantib (by RadNet), Nines (by Sirona), Arterys (by Tempus), MedoAI (by Exo), and Predible (by nference). This trend should continue in 2023, as VCs remain selective and larger AI players extend their lead over their smaller competitors.

Imaging Leaves the Hospital – Between the surge of hospital-at-home initiatives and payors’ efforts to move imaging exams to outpatient settings, imaging’s shift beyond hospital walls continued throughout 2022 and doesn’t seem to be slowing as we head into 2023.

Federated Learning’s Glioblastoma Milestone

AI insiders celebrated a massive new study highlighting a federated learning AI model’s ability to delineate glioblastoma brain tumors with high accuracy and generalizability, while demonstrating FL’s potential value for rare diseases and underrepresented populations.

The UPenn-led research team went big: the study’s 71 sites across 6 continents made it the largest FL project to date, its 6,314 patients’ mpMRIs created the biggest glioblastoma (GBM) dataset ever, and its nearly 280 authors were the most we’ve seen in a published study.

The researchers tested their final GBM FL consensus model twice – first using 20% of the “local” mpMRIs from each site that weren’t used in FL training, and second using 590 “out-of-sample” exams from 6 sites that didn’t participate in FL development.

The FL consensus model achieved significant improvements over an AI model trained with public data when delineating the three main GBM tumor sub-compartments that are most relevant for treatment planning:

  • Surgically targetable tumor core: +33% w/ local, +27% w/ out-of-sample
  • Enhancing tumor: +27% w/ local, +15% w/ out-of-sample
  • Whole tumor: +16% w/ local, +16% w/ out-of-sample data
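
For context on the method itself: in federated learning, each site trains on its own data and shares only model parameters, which a coordinating server aggregates into the consensus model. The study’s exact aggregation scheme isn’t detailed here, but the canonical approach is federated averaging (FedAvg), weighting each site by its exam count. A minimal sketch with illustrative parameter vectors:

```python
import numpy as np

def fedavg(site_weights, site_counts):
    """Federated averaging: combine per-site model parameters into a
    consensus model, weighting each site by its number of exams.
    Raw images never leave the sites -- only parameters are shared."""
    counts = np.asarray(site_counts, dtype=float)
    stacked = np.stack(site_weights)  # shape: (n_sites, n_params)
    return (stacked * (counts / counts.sum())[:, None]).sum(axis=0)

# Three hypothetical sites with different cohort sizes
w_sites = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.3, 0.9])]
n_exams = [100, 300, 600]
consensus = fedavg(w_sites, n_exams)  # larger sites pull the average harder
```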

The Takeaway

Federated learning’s ability to improve AI’s performance in new settings/populations while maintaining patient data privacy has become well established in the last few years. However, this study takes FL’s resume to the next level given its unprecedented scope and the significant complexity associated with mpMRI glioblastoma exams, suggesting that FL will bring a “paradigm shift for multi-site collaborations.”

The Mammography AI Generalizability Gap

The “radiologists with AI beat radiologists without AI” trend might have achieved mainstream status in Spring 2020, when the DM DREAM Challenge developed an ensemble of mammography AI solutions that allowed radiologists to outperform rads who weren’t using AI.

The DM DREAM Challenge had plenty of credibility. It was produced by a team of respected experts, combined eight top-performing AI models, and used massive training and validation datasets (144k & 166k exams) from geographically distant regions (Washington state, USA & Stockholm, Sweden).

However, a new external validation study highlighted one problem that many weren’t thinking about back then. Ethnic diversity can have a major impact on AI performance, and the majority of women in the two datasets were White.

The new study used an ensemble of 11 mammography AI models from the DREAM study (the Challenge Ensemble Model; CEM) to analyze 37k mammography exams from UCLA’s diverse screening program, finding that:

  • The CEM model’s UCLA performance declined from the previous Washington and Sweden validations (AUROCs: 0.85 vs. 0.90 & 0.92)
  • The CEM model improved when combined with UCLA radiologist assessments, but still fell short of the Sweden AI+rads validation (AUROCs: 0.935 vs. 0.942)
  • The CEM + radiologists model also achieved slightly lower sensitivity (0.813 vs. 0.826) and specificity (0.925 vs. 0.930) than UCLA rads without AI 
  • The CEM + radiologists method performed particularly poorly with Hispanic women and women with a history of breast cancer
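
For readers unfamiliar with ensembling mechanics: the usual approach averages each member model’s predicted probability per exam, then scores the averaged output, here measured with a rank-sum AUROC (tie handling omitted for brevity). The scores below are synthetic, not actual CEM outputs:

```python
import numpy as np

def auc(scores, labels):
    """AUROC via the rank-sum formulation: the probability that a
    random positive exam outscores a random negative one."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Ensemble = mean of each member model's predicted probability per exam
member_scores = np.array([
    [0.9, 0.2, 0.7, 0.1],   # model A
    [0.8, 0.3, 0.6, 0.2],   # model B
    [0.7, 0.4, 0.8, 0.3],   # model C
])
labels = np.array([1, 0, 1, 0])
ensemble = member_scores.mean(axis=0)
print(auc(ensemble, labels))  # -> 1.0 on this toy data
```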

The Takeaway

Although generalization challenges and the importance of data diversity are everyday AI topics in late 2022, this follow-up study highlights how big of a challenge they can be (regardless of training size, ensemble approach, or validation track record), and underscores the need for local validation and fine-tuning before clinical adoption. 

It also underscores how much we’ve learned in the last three years, as neither the 2020 DREAM study’s limitations statement nor critical follow-up editorials mentioned data diversity among the study’s potential challenges.

Google Launches Cloud Medical Imaging Suite

Google announced what might be its biggest, or at least most public, push into medical imaging AI with the launch of its new Google Cloud Medical Imaging Suite.

The Suite directly targets organizations that are developing imaging AI models and performing advanced image-based analytics, while also bolstering Google’s positioning in the healthcare cloud race.

The Medical Imaging Suite is (logically) centered around Google Cloud’s image storage and Healthcare API, which combine with its DICOMweb-based data exchange and automated DICOM de-identification tech to create a cloud-based AI development environment. Meanwhile, its “Suite” title is earned through integrations with an array of Google and partner solutions:

  • NVIDIA’s annotation tools (including its MONAI toolkit) to help automate image labeling
  • Google’s BigQuery and Looker solutions to search and analyze imaging data, and create training datasets
  • Google’s Vertex AI environment to accelerate AI pipeline development
  • NetApp’s hybrid cloud services to support on-premise-to-cloud data management
  • Google’s Anthos solution for centralized policy management and enforcement
  • Change Healthcare’s cloud-native enterprise imaging PACS for clinical use
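
On the de-identification piece: the Healthcare API performs this server-side, but conceptually it amounts to blanking or removing a profile of identifying DICOM tags before images reach a shared environment. A deliberately simplified sketch, with a plain dict standing in for a DICOM header and a heavily abridged tag list (real profiles such as DICOM PS3.15 cover far more):

```python
# DICOM headers carry PHI in well-known tags; de-identification blanks
# or removes them before data reaches a shared cloud environment.
PHI_TAGS = {
    "PatientName", "PatientID", "PatientBirthDate",
    "InstitutionName", "ReferringPhysicianName",
}  # abridged and illustrative -- real de-id profiles are much longer

def deidentify(header: dict) -> dict:
    """Return a copy of the header with identifying tags blanked,
    leaving clinically useful tags (modality, study info) intact."""
    return {tag: ("" if tag in PHI_TAGS else value)
            for tag, value in header.items()}

header = {
    "PatientName": "Doe^Jane",
    "PatientID": "12345",
    "Modality": "CR",
    "StudyDescription": "Chest X-ray",
}
clean = deidentify(header)
```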

It’s possible that many of these solutions were already available to Google Cloud users, and AWS and Azure appear to have a similar list of imaging capabilities/partners, so this announcement may only prove technologically significant if it leads to Google Cloud creating a differentiated and/or seamlessly integrated suite going forward.

However, the announcement’s marketing impact was immediate, as press articles and social media conversations largely celebrated Google Cloud’s new role in solving imaging’s interoperability and AI development problems. It’s been a while since we’ve seen AWS or Azure gain imaging headlines or public praise like that, and they’re the healthcare cloud market share leaders.

The Takeaway

Although some might debate whether the Medical Imaging Suite’s features are all that new, last week’s launch certainly reaffirms Google Cloud’s commitment to medical imaging (with an AI development angle), and suggests that we might see more imaging-targeted efforts from them going forward.

Arterys and Tempus’ Precision Merger

Arterys was just acquired by precision medicine AI powerhouse Tempus Labs, marking perhaps the biggest acquisition in the history of imaging AI, and highlighting the segment’s continued shift beyond traditional radiology use cases. 

Arterys has become one of imaging’s AI platform and cardiac MRI 4D flow leaders, leveraging its 12 years of work and $70M in funding to build out a large team of imaging/AI experts, a solid customer base, and an attractive intellectual property portfolio (AI models, cloud viewer, and a unique multi-vendor platform).

Tempus Labs might not be a household name among Imaging Wire readers, but they’ve become a giant in the precision medicine AI space, using $1.1B in VC funding and the “largest library of clinical & molecular data” to develop a range of precision medicine and treatment discovery / development / personalization capabilities.

It appears that Arterys will continue to operate its core radiology AI business (with far more financial support), while supporting the imaging side of Tempus’s products and strategy.

This acquisition might not be as unprecedented as some think. We’ve seen imaging AI assume a central role within a number of next-generation drug discovery/development companies, including Owkin and nference (who recently acquired imaging AI startup Predible), while imaging AI companies like Quibim are targeting both clinical use and pharma/life sciences applications.

Of course, many will point out how this acquisition continues 2022’s AI shakeup, which brought at least five other AI acquisitions (Aidence & Quantib by RadNet; Nines by Sirona; MedoAI by Exo; Predible by nference) and two strategic pivots (MaxQ AI & Kheiron). Although these acquisitions weren’t positive signs for the AI segment, they revealed that imaging AI startups are attractive to a far more diverse range of companies than many could have imagined back in 2021 (including pharma and life sciences).

The Takeaway

Arterys just transitioned from being an independently-held leader of the (promising but challenged) diagnostic imaging AI segment to being a key part of one of the hottest companies in healthcare AI, all while managing to keep its radiology business intact. That might not be the exit that Arterys’ founders envisioned, but in many ways it’s an ideal second chapter.

Plaque AI’s First Reimbursement

The small list of cardiac imaging AI solutions to earn Medicare reimbursements just got bigger, following CMS’ move to add an OPPS code for AI-based coronary plaque assessments. That represents a major milestone for Cleerly, who filed for this code and leads the plaque AI segment, and it marks another sign of progress for the business of imaging AI.

With CMS’ October 1st OPPS update, Cleerly and other approved plaque AI solutions now qualify for $900 to $1,000 reimbursements when used with Medicare patients scanned in hospital outpatient settings. 

  • That achievement sets the stage for plaque AI’s next major reimbursement hurdle: gaining coverage from local Medicare Administrative Contractors (MACs) and major commercial payers.

Cleerly and its qualifying plaque AI competitors join a growing list of Medicare-reimbursed imaging AI solutions, headlined by HeartFlow’s FFRCT solution ($930-$950) and Perspectum’s LiverMultiScan MRI software ($850-$1,150), both of which have since expanded their reimbursements across MAC regions and major commercial payers. 

  • The last few years also brought temporary NTAP reimbursements for Viz.ai (LVO detection / coordination), Caption Health (echo AI guidance), and Optellum (lung cancer risk assessments), plus a growing number of imaging AI CPT III codes that might lead to future reimbursements.

The new reimbursement should also drive advancements within the CCTA plaque AI segment, giving providers more incentive to adopt this technology, and providing emerging plaque AI vendors (e.g. Elucid, Artrya) a clearer path towards commercialization and VC funding.

The Takeaway

CMS’ new plaque AI OPPS code marks a major milestone for Cleerly’s commercial and clinical expansion, and a solid step for the plaque AI segment. 

The reimbursement also adds momentum for the overall imaging AI industry, which finally seems to be gaining support from CMS. That’s good news for AI vendors, since it’s pretty much proven that reimbursements drive AI adoption and are often necessary to show ROI.

Imaging AI Funding Still Solid in 2022

Despite plenty of challenges, imaging AI startups appear to be on pace for another solid funding year, helped by a handful of huge raises and a diverse mix of early-to-mid stage rounds.

So far in 2022 we’ve covered 18 AI funding events that totaled $615M, putting imaging AI startups roughly on pace for 2021’s record-high funding levels ($815M based on Signify’s analysis). Those funding rounds revealed a number of interesting trends:

  • The Big Getting Bigger – $442M of this year’s funding (72% of total) came from just four later-stage rounds: Aidoc ($110M), Viz.ai ($100M), Cleerly ($192M), and Qure.ai ($40M), as VCs increasingly bet on AI’s biggest players. 
  • Rounding Up the Rest – The remaining 14 companies raised a combined $173M (28% of total), with an even mix of Seed/Pre-Seed (4 rounds, $10.5M), Series A (5, $74M), and Series B (5, $89M) rounds. 
  • VCs Heart Cardiovascular AI – Cardiovascular AI startups captured a disproportionate share of VC funding, as Cleerly ($192M) was joined by Elucid ($27M) and Us2.ai ($15M). Considering that Circle CVI was recently acquired for $213M and HeartFlow has raised over $577M, cardiac AI startups seem to have become imaging AI’s valuation leaders (at least alongside diversified and care coordination AI vendors).
  • No H2 Drop-Off (yet) – The funding breakdown between Q1 (6 rounds, $63.5M), Q2 (7, $289M), and Q3 (5, $263M) doesn’t suggest that we’re in the middle of a second-half slowdown… even though we probably are. 
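
The shares above are straightforward arithmetic on the disclosed rounds; a quick sanity check using the figures as reported:

```python
# Figures as reported above, in $M
big_rounds = {"Aidoc": 110, "Viz.ai": 100, "Cleerly": 192, "Qure.ai": 40}
total_2022 = 615  # across all 18 covered funding rounds

big_total = sum(big_rounds.values())   # the four later-stage rounds
big_share = big_total / total_2022
rest = total_2022 - big_total          # raised by the remaining 14 companies
print(big_total, f"{big_share:.0%}", rest)  # -> 442 72% 173
```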

The Takeaway

Despite widespread AI consolidation chatter in Q1 and the emergence of economic headwinds by Q2, imaging AI startups are on pace for yet another massive funding year. These numbers don’t reveal how many otherwise-solid AI startups are struggling to secure their next funding round, and they don’t guarantee that funding will also be strong in 2023, but they do suggest that 2022’s AI funding won’t be nearly as bleak as some naysayers warned.


-- The Imaging Wire team