Imaging AI Funding Still Solid in 2022

Despite plenty of challenges, imaging AI startups appear to be on pace for another solid funding year, helped by a handful of huge raises and a diverse mix of early-to-mid stage rounds.

So far in 2022 we’ve covered 18 AI funding events that totaled $615M, putting imaging AI startups roughly on pace for 2021’s record-high funding levels ($815M based on Signify’s analysis). Those funding rounds revealed a number of interesting trends:

  • The Big Getting Bigger – $442M of this year’s funding (72% of total) came from just four later-stage rounds: Aidoc ($110M), Viz.ai ($100M), Cleerly ($192M), and Qure.ai ($40M), as VCs increasingly bet on AI’s biggest players. 
  • Rounding Up the Rest – The remaining 14 companies raised a combined $173M (28% of total), with an even mix of Seed/Pre-Seed (4 rounds, $10.5M), Series A (5, $74M), and Series B (5, $89M) rounds. 
  • VCs Heart Cardiovascular AI – Cardiovascular AI startups captured a disproportionate share of VC funding, as Cleerly ($192M) was joined by Elucid ($27M) and Us2.ai ($15M). Considering that Circle CVI was recently acquired for $213M and HeartFlow has raised over $577M, cardiac AI startups seem to have become imaging AI’s valuation leaders (at least alongside diversified and care coordination AI vendors).
  • No H2 Drop-Off (yet) – The funding breakdown between Q1 (6 rounds, $63.5M), Q2 (7, $289M), and Q3 (5, $263M) doesn’t suggest that we’re in the middle of a second-half slowdown… even though we probably are. 
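The arithmetic behind those bullets is easy to redo. A quick sanity check (all figures in $M, taken from the article; the annualized pace is a naive three-quarters-to-four extrapolation, not a forecast):

```python
# Redoing the arithmetic behind the 2022 funding bullets (all figures in $M).
TOTAL_2022_SO_FAR = 615  # 18 rounds covered through Q3

big_four = {"Aidoc": 110, "Viz.ai": 100, "Cleerly": 192, "Qure.ai": 40}
big_four_sum = sum(big_four.values())              # 442
big_four_share = big_four_sum / TOTAL_2022_SO_FAR  # ~72% of total

remaining = {"Seed/Pre-Seed": 10.5, "Series A": 74, "Series B": 89}  # 14 companies
remaining_sum = sum(remaining.values())            # 173.5 (article rounds to $173M)

# Naive annualized pace: three quarters covered, extrapolated to four
annualized = TOTAL_2022_SO_FAR / 3 * 4             # ~820, vs. 2021's $815M

print(f"Big four: ${big_four_sum}M ({big_four_share:.0%} of total)")
print(f"Remaining rounds: ${remaining_sum}M")
print(f"Annualized 2022 pace: ~${annualized:.0f}M")
```

The extrapolation is what "roughly on pace for 2021's record" means in practice: three quarters at $615M annualizes to ~$820M, just above 2021's $815M.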

The Takeaway

Despite widespread AI consolidation chatter in Q1 and the emergence of economic headwinds by Q2, imaging AI startups are on pace for yet another massive funding year. These numbers don’t reveal how many otherwise-solid AI startups are struggling to secure their next funding round, and they don’t guarantee that funding will also be strong in 2023, but they do suggest that 2022’s AI funding won’t be nearly as bleak as some naysayers warned.

AI Crosses the Chasm

Despite plenty of challenges, Signify Research forecasts that the global imaging AI market will nearly quadruple by 2026, as AI “crosses the chasm” towards widespread adoption. Here’s how Signify sees that transition happening:

Market Growth – After generating global revenues of around $375M in 2020 and $400M in 2021, Signify expects the imaging AI market to maintain a massive 27.6% CAGR through 2026, when it reaches nearly $1.4B.
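That forecast is easy to reproduce from the stated numbers (a sketch of the compounding math only, not Signify's underlying model):

```python
# Compounding 2021's ~$400M imaging AI market at the forecast 27.6% CAGR
# for five years (2021 -> 2026) reproduces the "nearly $1.4B" figure.
base_2021 = 400e6
cagr = 0.276
years = 5

value_2026 = base_2021 * (1 + cagr) ** years
print(f"Projected 2026 market: ${value_2026 / 1e9:.2f}B")  # ~$1.35B
```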

Product-Led Growth – This growth will be partially driven by the availability of new and more-effective AI products, following:

  • An influx of new regulatory-approved solutions
  • Continued improvements to current products (e.g. adding triage to detection tools)
  • AI leaders expanding into new clinical segments
  • AI’s evolution from point solutions to comprehensive solutions/workflows
  • The continued adoption of AI platforms/marketplaces

The Big Four – Imaging AI’s top four clinical segments (breast, cardiology, neurology, pulmonology) represented 87% of the AI market in 2021, and those segments will continue to dominate through 2026. 

VC Support – After investing $3.47B in AI startups between 2015 and 2021, Signify expects that VCs will remain a market growth driver, while their funding continues to shift toward later stage rounds. 

Remaining Barriers – AI still faces plenty of barriers, including limited reimbursements, insufficient economic/ROI evidence, stricter regulatory standards (especially in EU), and uncertain future prioritization from healthcare providers and imaging IT vendors. 

The Takeaway

2022 has been a tumultuous year for AI, bringing a number of notable achievements (increased adoption, improving products, new reimbursements, more clinical evidence, big funding rounds) that sometimes seemed to be overshadowed by AI’s challenges (difficult funding climate, market consolidation, slower adoption than previously hoped).  

However, Signify’s latest research suggests that 2022’s ups-and-downs might prove to be part of AI’s path towards mainstream adoption. And based on the steeper growth Signify forecasts for 2025-2026 (see chart above), the imaging AI market’s growth rate and overall value should become far greater after it finally “crosses the chasm.”

RevealDx & contextflow’s Lung CT Alliance

RevealDx and contextflow announced a new alliance that should advance the companies’ product and distribution strategies, and appears to highlight an interesting trend towards more comprehensive AI solutions.

The companies will integrate RevealDx’s RevealAI-Lung solution (lung nodule characterization) with contextflow’s SEARCH Lung CT software (lung nodule detection and quantification), creating a uniquely comprehensive lung cancer screening offering. 

contextflow will also become RevealDx’s exclusive distributor in Europe, adding to RevealDx’s global channel that includes a distribution alliance with Volpara (exclusive in Australia/NZ, non-exclusive in US) and a platform integration deal with Sirona.

The alliance highlights contextflow’s new partner-driven strategy to expand SEARCH Lung CT beyond its image-based retrieval roots, coming just a few weeks after contextflow announced an integration with Oxipit’s ChestEye Quality AI solution to identify missed lung nodules.

In fact, contextflow’s AI expansion efforts appear to be part of an emerging trend, as AI vendors work to support multiple steps within a given clinical activity (e.g. lung cancer assessments) or spot a wider range of pathologies in a given exam (e.g. CXRs):

  • Volpara has amassed a range of complementary breast cancer screening solutions, and has started to build out a similar suite of lung cancer screening solutions (including RevealDx & Riverain).
  • A growing field of chest X-ray AI vendors (Annalise.ai, Lunit, Qure.ai, Oxipit, Vuno) leads with the ability to detect multiple findings from a single CXR scan and AI workflow. 
  • Siemens Healthineers’ AI-RAD Companion Chest CT solution combines these two approaches, automating multiple diagnostic tasks (analysis, quantification, visualization, results generation) across a range of different chest CT exams and organs.

The Takeaway

contextflow and RevealDx’s European alliance seems to make a lot of sense, allowing contextflow to enhance its lung nodule detection/quantification findings with characterization details, while giving RevealDx the channel and lung nodule detection starting points that it likely needs.

The partnership also appears to represent another step towards more comprehensive and potentially more clinically valuable AI solutions, and away from the narrow applications that have dominated AI portfolios (and AI critiques) before now.

AI Experiences & Expectations

The European Society of Radiology just published new insights into how imaging AI is being used across Europe and how the region’s radiologists view this emerging technology.

The Survey – The ESR reached out to 27,700 European radiologists in January 2022 with a survey regarding their experiences and perspectives on imaging AI, receiving responses from just 690 rads (a ~2.5% response rate).

Early Adopters – 276 of the 690 respondents (40%) had clinical experience using imaging AI, with the majority of these AI users:

  • Working at academic and regional hospitals (52% & 37% – only 11% at practices)
  • Leveraging AI for interpretation support, case prioritization, and post-processing (51.5%, 40%, 28.6%)

AI Experiences – The radiologists who do use AI revealed a mix of positive and negative experiences:

  • Most found diagnostic AI’s output reliable (75.7%)
  • Few experienced technical difficulties integrating AI into their workflow (17.8%)
  • The majority found AI prioritization tools to be “very helpful” or “moderately helpful” for reducing staff workload (23.4% & 62.2%)
  • However, far fewer reported that diagnostic AI tools reduced staff workload (22.7% Yes, 69.8% No)

Adoption Barriers – Most coverage of this study will likely focus on the fact that only 92 of the surveyed rads (13.3%) plan to acquire AI in the future, while 363 don’t intend to acquire AI (52.6%). The radiologists who don’t plan to adopt AI (including those who’ve never used AI) based their opinions on:

  • AI’s lack of added value (44.4%)
  • AI not performing as well as advertised (26.4%)
  • AI adding too much work (22.9%)
  • And “no reason” (6.3%)

US Context – These results are in the same ballpark as the ACR’s 2020 US-based survey (33.5% using AI, only 20% of non-users planned to adopt within 5 years), although 2020 feels like a long time ago.

The Takeaway

Even if this ESR survey might leave you asking more questions (What about AI’s impact on patient care? How often is AI actually being used? How do opinions differ between AI users and non-users?), more than anything it confirms what many of us already know… We’re still very early in AI’s evolution, and there are still plenty of performance and perception barriers that AI has to overcome.

Imaging AI’s Unseen Potential

Amid the dozens of imaging AI papers and presentations that came out over the last few weeks were three compelling new studies highlighting how much “unseen” information AI can extract from medical images, and the massive impact this information could have. 

Imaging-Led Population Health – An excellent presentation from Ayis Pyrros, MD placed radiology at the center of healthcare’s transition to value-based care and population health, highlighting the AI training opportunities that will come with more value-based care HCC codes and imaging AI’s untapped potential for early disease detection and management. Dr. Pyrros specifically emphasized chest X-ray’s potential given the exam’s ubiquity (26M Medicare CXRs in 2021), CXR AI’s ability to predict outcomes (e.g. mortality, comorbidities, hospital stays), and how opportunistic AI screening can/should support proactive care that benefits both patients and health systems.

  • Healthcare’s value-based overhaul has traditionally been seen as a threat to radiology’s fee-for-service foundations. Even if that might still be true from a business model perspective, Dr. Pyrros makes it quite clear that the shift to value-based care could make radiology even more important — and importance is always good for business.

AI Race Detection – The final peer-reviewed version of the landmark study showing that AI models can accurately predict patient race was officially published, further confirming that AI can detect patients’ self-reported race by analyzing medical image features. The new paper showed that AI very accurately detects patient race across modalities and anatomical regions (AUCs: CXRs 0.91 – 0.99, chest CT 0.89 – 0.96, mammography 0.81), without relying on proxies or imaging-related confounding features (BMI, disease distribution, and breast density all had ≤0.61 AUCs).

  • If imaging AI models intended for clinical tasks can identify patients’ races, they could be applying the same racial biomarkers to diagnosis, thus reproducing or exacerbating healthcare’s existing racial disparities. That’s an important takeaway whether you’re developing or adopting AI.

CXR Cost Predictions – The smart folks at the UCSF Center for Intelligent Imaging developed a series of CXR-based deep learning models that can predict patients’ future healthcare costs. Developed with 21,872 frontal CXRs from 19,524 patients, the best performing models were able to relatively accurately identify which patients would have a top-50% personal healthcare cost after one, three, and five years (AUCs: 0.806, 0.771, 0.729). 

  • Although predicting which patients will have higher costs could be useful on its own, these findings also suggest that similar CXR-based DL models could be used to flag patients who may deteriorate, initiate proactive care, or support healthcare cost analysis and policies.

The Case for Algorithmic Audits

A new Lancet Digital Health study could have become one of the many “AI rivals radiologists” papers that we see each week, but it instead served as an important lesson that traditional performance tests might not prove that AI models are actually safe for clinical use.

The Model – The team developed their proximal femoral fracture detection DL model using 45.7k frontal X-rays performed at Australia’s Royal Adelaide Hospital (w/ 4,861 fractures).

The Validation – They then tested it against a 4,577-exam internal set (w/ 640 fractures), 400 of which were also interpreted by five radiologists (w/ 200 fractures), and against an 81-image external validation set from Stanford.

The Results – All three tests produced results that a typical study might have viewed as evidence of high-performance: 

  • The model outperformed the five radiologists (0.994 vs. 0.969 AUCs)
  • It beat the best performing radiologist’s sensitivity (95.5% vs. 94.5%) and specificity (99.5% vs 97.5%)
  • It generalized well with the external Stanford data (0.980 AUC)

The Audit – Despite the strong results, a follow-up audit revealed that the model might make some predictions for the wrong reasons, suggesting that it is unsafe for clinical deployment:

  • One false negative X-ray included an extremely displaced fracture that human radiologists would catch
  • X-rays featuring abnormal bones or joints had a 50% false negative rate, far higher than the reader set’s overall false negative rate (2.5%)
  • Saliency maps showed that the model’s decisions were almost never based on the outer region of the femoral neck, even on images where that region was clinically relevant (though it still often made the right diagnosis)
  • The model scored a high AUC with the Stanford data, but showed a substantial model operating point shift

The Case for Auditing – Although the study might not have started with this goal, it ended up becoming an argument for more sophisticated preclinical auditing. It even led to a separate paper outlining the team’s algorithmic auditing process, which among other things suggested that AI users and developers should co-own audits.

The Takeaway

Auditing generally isn’t the most exciting topic in any field, but this study shows that it’s exceptionally important for imaging AI. It also suggests that audits might be necessary for achieving the most exciting parts of AI, like improving outcomes and efficiency, earning clinician trust, and increasing adoption.

Imaging AI’s Big 2021

Signify Research’s latest imaging AI VC funding report revealed an unexpected surge in 2021, along with major funding shifts that might explain why many of us didn’t see it coming. Here are some of Signify’s big takeaways, and here’s where to get the full report.

AI’s Path to $3.47B – Imaging AI startups have raised $3.47B in venture funding since 2015, helped by a record-high $815M in 2021 after several years of fluctuating investments (2020: $592M, 2019: $450M, 2018: $790M).

Big Get Bigger – That $3.47B funding total came from over 200 companies and 290 deals, although the 25 highest-funded companies were responsible for 80% of all capital raised. VCs increased their focus on established AI companies in 2021, resulting in record-high late-stage funding (~$723.5M), record-low Pre-Seed/Seed funding (~$7M), and a major increase in average deal size (~$33M vs. ~$12M in 2020).
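Restating that concentration numerically, using only the figures quoted above:

```python
# Restating the funding-concentration figures from Signify's report.
total_raised = 3.47e9   # 2015-2021, across 200+ companies and 290 deals
top25_share = 0.80

top25_raised = total_raised * top25_share  # ~$2.78B raised by just 25 companies
deal_growth = 33e6 / 12e6                  # average deal size, 2021 vs. 2020

print(f"Top 25 companies: ~${top25_raised / 1e9:.2f}B of the total")
print(f"Average deal size grew ~{deal_growth:.1f}x year over year")
```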

Made in China – If you’re surprised that 2021 was a record AI funding year, that’s probably because much of that funding went to Chinese companies (~$260M vs. the US’s ~$150M), continuing a recent trend (China’s share of AI VC funding was 45% in 2020 and 26% in 2019). We’re also seeing major funding go to top startups in South Korea and Australia, adding to APAC AI vendors’ funding leadership.

Health VC Context – Although imaging AI’s $815M 2021 funding total seems big for a category that’s figuring out its path towards full adoption, the amount VC firms are investing in other areas of healthcare makes it seem pretty reasonable. Our two previous Digital Health Wire issues featured seven digital health startup funding rounds with a total value of $267M (and that’s from just one week).

The Takeaway

Signify correctly points out that imaging AI funding remains strong despite a list of headwinds (COVID, regulatory hurdles, lacking reimbursements), while showing more signs of AI market maturation (larger funding rounds to fewer players) and suggesting that consolidation is on the way. Those factors will likely continue in 2022. However, more innovation is surely on the way too and quite a few regional AI powerhouses still haven’t expanded globally, suggesting that the next steps in AI’s evolution won’t be as straightforward as some might think.

Autonomous AI Milestone

Just as the debate over whether AI might replace radiologists is starting to fade away, Oxipit’s ChestLink solution became the first regulatory-approved imaging AI product intended to perform diagnoses without involving radiologists (*please see editor’s note below regarding Behold.ai). That’s a big and potentially controversial milestone in the evolution of imaging AI and it’s worth a deeper look.

About ChestLink – ChestLink autonomously identifies CXRs without abnormalities and produces final reports for each of these “normal” exams, automating 15% to 40% of reporting workflows.

Automation Evidence – Oxipit has already piloted ChestLink in supervised settings for over a year, processing over 500k real-world CXRs with 99% sensitivity and no clinically relevant errors.

The Rollout – With its CE Class IIb Mark finalized, Oxipit is now planning to roll out ChestLink across Europe and begin “fully autonomous” operation by early 2023. Oxipit specifically mentioned primary care settings (many normal CXRs) and large-scale screening projects (high volumes, many normal scans) in its announcement, but ChestLink doesn’t appear limited to those use cases.

ChestLink’s ability to address radiologist shortages and reduce labor costs seems like a strong and unique advantage. However, radiology’s first regulatory-approved autonomous AI solution might face even stronger challenges:

  • ChestLink’s CE Mark doesn’t account for country-specific regulations around autonomous diagnostic reporting (e.g. the UK requires “appropriate reporting” with ionizing radiation-based exams)
  • Radiologist societies historically push back against anything that might undermine radiologists’ clinical roles, earning potential, and future career stability
  • Health systems’ evidence requirements for any autonomous AI tools would likely be extremely high, and they might expect similarly high economic ROI in order to justify the associated diagnostic or reputational risks
  • Even the comments in Oxipit’s LinkedIn announcement had a much more skeptical tone than we typically see with regulatory approval announcements

The Takeaway

Autonomous AI products like ChestLink could address some of radiology’s greatest problems (radiologist overwork, staffing shortages, volume growth, low access in developing countries) and their economic value proposition is far stronger than most other diagnostic AI products.

However, autonomous AI solutions could also face more obstacles than any other imaging AI products we’ve seen so far, suggesting that it would take a combination of excellent clinical performance and major changes in healthcare policies/philosophies in order for autonomous AI to reach mainstream adoption.

*Editor’s Note – April 21, 2022: Behold.ai insists that it is the first imaging AI company to receive regulatory approval for autonomous AI. Its product is used with radiologist involvement and local UK guidelines require that radiologists read exams that use ionizing radiation. All above analysis regarding the possibilities and challenges of autonomous AI applies to any autonomous AI vendor in the current AI environment, including both Oxipit and Behold.ai.

Complementary PE AI

A new European Radiology study out of France highlighted how Aidoc’s pulmonary embolism AI solution can serve as a valuable emergency radiology safety net, catching PE cases that otherwise might have been missed and increasing radiologists’ confidence. 

Even if that’s technically what PE AI products are supposed to do, studies using commercially available products and focusing on how AI complements radiologists (vs. comparing AI and rad accuracy) are still rare and worth a closer look.

The Diagnostic Study – A team from the French teleradiology provider IMADIS analyzed AI and radiologist CTPA interpretations from patients with suspected PE (n = 1,202 patients), finding that:

  • Aidoc PE achieved higher sensitivity (0.926 vs. 0.900) and negative predictive value (0.986 vs. 0.981)
  • Radiologists achieved higher specificity (0.991 vs. 0.958), positive predictive value (0.950 vs. 0.804), and accuracy (0.977 vs. 0.953)
  • The AI tool flagged 219 suspicious PEs, with 176 true positives, including 19 cases that were missed by radiologists
  • The radiologists detected 180 suspicious PEs, with 171 true positives, including 14 cases that were missed by AI
  • Aidoc PE would have helped IMADIS catch 285 misdiagnosed PE cases in 2020 based on the above AI-only PE detection ratio (19 per 1,202 patients)  
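The headline predictive values and the 2020 extrapolation both fall out of the raw counts above. A sketch (the annual CTPA volume is implied by the article’s numbers, not stated directly):

```python
# Rebuilding the PE study's figures from the raw counts in the bullets.
patients = 1202
ai_flagged, ai_tp = 219, 176    # AI: suspicious PEs flagged / true positives
rad_flagged, rad_tp = 180, 171  # Radiologists: detections / true positives

ai_ppv = ai_tp / ai_flagged     # ~0.804, matching the quoted PPV
rad_ppv = rad_tp / rad_flagged  # 0.950

# 19 PEs were caught by AI alone; scaling that rate up to the quoted
# 285 cases implies IMADIS read ~18,000 CTPAs in 2020 (derived figure).
ai_only_rate = 19 / patients
implied_2020_volume = 285 / ai_only_rate

print(f"AI PPV: {ai_ppv:.3f} | Radiologist PPV: {rad_ppv:.2f}")
print(f"Implied 2020 CTPA volume: {implied_2020_volume:,.0f}")
```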

The Radiologist Survey – Nine months after IMADIS implemented Aidoc PE, a survey of its radiologists (n = 79) and a comparison versus its pre-implementation PE CTPAs revealed that:

  • 72% of radiologists believed Aidoc PE improved their diagnostic confidence and comfort 
  • 52% of radiologists said the AI solution didn’t impact their interpretation times
  • 14% indicated that Aidoc PE reduced interpretation times
  • 34% of radiologists believed the AI tool added time to their workflow
  • The solution actually increased interpretation times by an average of 7.2% (+1:03 minutes) 
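One derived figure worth noting: a 7.2% slowdown equal to 63 seconds implies a baseline interpretation time of roughly 14.6 minutes (back-of-envelope from the last bullet, not a number the study reports):

```python
# Back-of-envelope: a +7.2% slowdown equal to +1:03 implies the baseline.
added_seconds = 63         # +1:03 minutes
relative_increase = 0.072  # +7.2%

baseline_min = added_seconds / relative_increase / 60
print(f"Implied baseline interpretation time: ~{baseline_min:.1f} min")  # ~14.6
```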

The Takeaway

Now that we’re getting better at not obsessing over AI replacing humans, this is a solid example of how AI can complement radiologists by helping them catch more PE cases and make more confident diagnoses. Some radiologists might be concerned with false positives and added interpretation times, but the authors noted that AI’s PE detection advantages (and the risks of missed PEs) outweigh these potential tradeoffs.

Sirona Medical Acquires Nines AI, Talent

Sirona Medical announced its acquisition of Nines’ AI assets and personnel, representing notable milestones for Sirona’s integrated RadOS platform and the quickly-changing imaging AI landscape.

Acquisition Details – Sirona acquired Nines’ AI portfolio (data pipeline, ML engines, workflow/analytics tools, AI models) and key team members (CRO, Director of Product, AI engineers), while Nines’ teleradiology practice was reportedly absorbed by one of its telerad customers. Terms of the acquisition weren’t disclosed, although this wasn’t a traditional acquisition considering that Sirona and Nines had the same VC investor.

Sirona’s Nines Strategy – Sirona’s mission is to streamline radiologists’ overly-siloed workflows with its RadOS radiology operating system (unifies: worklist, viewer, reporting, AI, etc.), and it’s a safe bet that any acquisition or investment Sirona makes is intended to advance this mission. With that…

  • Nines’ most tangible contributions to Sirona’s strategy are its FDA-cleared AI models: NinesMeasure (chest CT-based lung nodule measurements) and NinesAI Emergent Triage (head CT-based intracranial hemorrhage and mass effect triage). The AI models will be integrated into the RadOS platform, bolstering Sirona’s strategy to allow truly-integrated AI workflows. 
  • Nines’ personnel might have the most immediate impact at Sirona, given the value/scarcity of experienced imaging software engineers and the fact that Nines’ product team arguably has more hands-on experience with radiologist workflows than any other imaging AI firm (at least among AI firms available for acquisition).
  • Nines’ other AI and imaging workflow assets should also help support Sirona’s future RadOS and AI development, although it’s harder to assess their impact for now.

The AI Shakeup Angle – This acquisition has largely been covered as another example of 2022’s AI shakeup, which isn’t too surprising given how active this year has been (MaxQ’s shutdown, RadNet’s Aidence/Quantib acquisitions, IBM shedding Watson Health). However, Nines’ strategy to combine a telerad practice with in-house AI development was quite unique and its decision to sell might say more about its specific business model (at its scale) than it does about the overall AI market.

The Takeaway

Since the day Sirona emerged from stealth, it’s done a masterful job articulating its mission to solve radiology’s workflow problems by unifying its IT infrastructure. Acquiring Nines’ AI assets certainly supports Sirona’s unified platform messaging, while giving it more technology and personnel resources to try to turn that message into a reality.

Meanwhile, Nines becomes the latest of surely many imaging AI startups to be acquired, pivoted, or shut down, as AI adoption evolves at a slower pace than some VC runways. Nines had a really interesting strategy and some big-name founders and advisors, and now its work and team will live on through Sirona.
