Radiology’s Nonphysician Service Expansion

A new Harvey L. Neiman Health Policy Institute study showed that the recent expansion of nonphysician practitioners (NPPs) across US radiology practices coincided with similar increases in NPP-billed services — services that have traditionally been performed and billed by radiologists.

The Study – Researchers reviewed 2017-2019 data for Medicare claims-submitting nurse practitioners and physician assistants (together “NPPs”) who were employed by US radiology practices, finding that:

  • The number of radiology-employed NPPs who submitted claims increased by 16.3% between 2017 and 2019 (523 to 608 NPPs), while the number of US radiology practices that employed claims-submitting NPPs jumped by 14.3% (196 to 224 practices)
  • This NPP service expansion was driven by growth in the number of NPPs billing clinical evaluation and management services (E&M; +7.6% to 354 NPPs), invasive procedures (+18.3% to 458), and image interpretation services (+31.8% to 112).
  • Meanwhile, total NPP wRVUs increased by 17.3%, similarly driven by E&M services (+40% to 111k wRVUs), invasive procedures (+5.6% to 189k), and image interpretation (+74% to 8,850 wRVUs)
  • Some radiologists might be concerned that image interpretation saw the greatest NPP headcount and wRVU growth (see +31.8% & +74% stats above), although imaging only represented a small share of overall NPP wRVUs (2.9% in 2019), and 86.7% of NPP-submitted imaging services were for either DEXA scans or swallowing studies. 

The Takeaway

Although roughly 87% of radiology practices still don’t employ NPPs who submit Medicare claims (as of 2019 anyway), this study reveals a clear trend towards NPPs performing more billable procedures — including image interpretation. 

Given previous evidence of NPPs’ growing employment in radiology practices and the major role NPPs play within other specialties, this trend is very likely to continue, leading to more blended radiology teams and more radiologist concerns about the NPP ‘slippery slope.’

AI Crosses the Chasm

Despite plenty of challenges, Signify Research forecasts that the global imaging AI market will nearly quadruple by 2026, as AI “crosses the chasm” towards widespread adoption. Here’s how Signify sees that transition happening:

Market Growth – After generating global revenues of around $375M in 2020 and $400M in 2021, Signify expects the imaging AI market to maintain a massive 27.6% CAGR through 2026, when it reaches nearly $1.4B.
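
For context, those figures hang together arithmetically: compounding the ~$400M 2021 base at a 27.6% CAGR for five years lands right around Signify's 2026 number. Here's a back-of-the-envelope sketch using only the rounded figures above:

```python
# Back-of-the-envelope check of Signify's forecast, using only the rounded figures above
base_2021 = 400e6        # ~$400M global imaging AI revenue in 2021
cagr = 0.276             # 27.6% compound annual growth rate
years = 2026 - 2021      # five years of compounding

projected_2026 = base_2021 * (1 + cagr) ** years
print(f"Projected 2026 revenue: ${projected_2026 / 1e9:.2f}B")  # ~$1.35B, i.e. "nearly $1.4B"
```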

Product-Led Growth – This growth will be partially driven by the availability of new and more-effective AI products, reflecting:

  • An influx of new regulatory-approved solutions
  • Continued improvements to current products (e.g. adding triage to detection tools)
  • AI leaders expanding into new clinical segments
  • AI’s evolution from point solutions to comprehensive solutions/workflows
  • The continued adoption of AI platforms/marketplaces

The Big Four – Imaging AI’s top four clinical segments (breast, cardiology, neurology, pulmonology) represented 87% of the AI market in 2021, and those segments will continue to dominate through 2026. 

VC Support – After investing $3.47B in AI startups between 2015 and 2021, Signify expects that VCs will remain a market growth driver, while their funding continues to shift toward later stage rounds. 

Remaining Barriers – AI still faces plenty of barriers, including limited reimbursements, insufficient economic/ROI evidence, stricter regulatory standards (especially in the EU), and uncertain future prioritization from healthcare providers and imaging IT vendors.

The Takeaway

2022 has been a tumultuous year for AI, bringing a number of notable achievements (increased adoption, improving products, new reimbursements, more clinical evidence, big funding rounds) that sometimes seemed to be overshadowed by AI’s challenges (difficult funding climate, market consolidation, slower adoption than previously hoped).  

However, Signify’s latest research suggests that 2022’s ups-and-downs might prove to be part of AI’s path towards mainstream adoption. And based on the steeper growth Signify forecasts for 2025-2026, the imaging AI market’s growth rate and overall value should become far greater once it finally “crosses the chasm.”

Intelerad’s Reporting Play

Intelerad continued its M&A streak, acquiring radiology reporting company PenRad Technologies in a relatively small deal that might have a much bigger impact than some think.

PenRad has a solid share of the breast and lung cancer screening reporting segments, which has made it a target for a number of PACS vendors in recent years.

The acquisition is another example of Intelerad using its private equity backing to complete its informatics portfolio, following a series of deals that expanded its reach into new clinical areas (cardiac, OB/GYN), regions (UK), technologies (cloud), and functionalities (image sharing, cloud VNA).

Adding PenRad will immediately give Intelerad three proven cancer screening reporting solutions to offer its PACS customers, while bringing Intelerad into an untold number of PenRad accounts that it didn’t previously work with.

The deal’s long-term impact will likely be dictated by how well Intelerad integrates and enhances its new PenRad technologies. If Intelerad is able to seamlessly integrate its PACS/worklist with PenRad’s dictation/reporting, it could create a truly unique advantage — especially if Intelerad expands its reporting capabilities beyond just cancer screening. 

Intelerad’s PenRad acquisition and Sirona’s unified radiology platform also highlight the differentiating role that integrated reporting might play in future enterprise imaging portfolios, although there aren’t many more reporting companies still available for acquisition.

The Takeaway

Informatics veterans might point out that it’s much easier to acquire a portfolio of companies than it is to integrate all that software — and they’d be correct. That said, most would also agree that Intelerad has assembled a uniquely comprehensive enterprise imaging portfolio and it would be extremely well-positioned if/when that portfolio becomes fully integrated.

RadNet’s Bellwether Briefing

RadNet’s investor briefings have come to serve as a medical imaging industry bellwether, and last week’s Q2 call lived up to that reputation, providing key insights into how RadNet is approaching imaging’s biggest trends (and by proxy, where its hospital partners, competitors, and vendors are also likely focusing).

Here are some of the big takeaways…

Hospital system joint ventures remain core to RadNet’s strategy, as payors’ outpatient emphasis helped RadNet expand hospital JV agreements to 29% of its imaging centers (vs. 25% in 2021). It’s targeting 50% in the next 2-3 years.

RadNet acquired three imaging centers in 2022, but much of its short-term imaging center growth will likely come from the 15 net new locations that are under construction.

A long list of headwinds (reimbursement cuts, labor shortages, access to capital, recession) could lead to greater imaging center market consolidation, and RadNet believes it’s better equipped to take advantage of a downturn than its competitors.

RadNet forecasts that the tight imaging center labor market is “here to stay” and “needs to be addressed with technology.” Following that advice, RadNet highlighted its efficiency-focused moves to adopt MRI DLIR software, launch a remote MRI technologist management solution, and transition its eRAD PACS to the cloud.

RadNet’s AI strategy remains focused on cancer detection / diagnosis leadership and it still views AI extremely optimistically, although the briefing served as a helpful reminder of how early we are in AI’s evolution:

  • Q2 AI revenues reached just $1.5M (that’s including Aidence & Quantib), while heavy investments led to a -$5.9M pre-tax loss for the AI division.
  • RadNet is rolling out its DeepHealth mammography AI solution through Q4 2022 or Q1 2023, calling the implementation’s installation and training requirements a “tall task” (and they developed it…).
  • Nonetheless, RadNet is confident that the mammo AI solution will deliver immediate benefits to its team’s accuracy, productivity (up to 15-20%), and imaging center scan volumes.
  • RadNet also installed its prostate MRI solution at select imaging centers that perform prostate cancer screening, although its overall prostate and lung cancer AI adoption will come later.

The Takeaway

The main takeaway from RadNet’s Q2 call likely depends on your role within imaging. That said, its statements and activities certainly suggest that the major imaging center companies will get larger and more JV-centric, that there are still plenty of reasons to be optimistic about AI (and to be patient with it), and that the demand for technologies that solve imaging’s efficiency problems continues to grow.

A Case for VERDICT MRI

A new study in the journal Radiology showed that VERDICT MRI-based analysis could significantly improve prostate cancer lesion characterization, and might solve PCa screening’s unnecessary biopsy problem.

Before we jump into the study… VERDICT MRI (Vascular, Extracellular, and Restricted Diffusion for Cytometry in Tumor) is a novel diffusion MRI modeling technique that estimates microstructural tissue properties, and has shown promise for cancer diagnosis and assessments. It can also be performed using standard 3T MRI exams.
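
For readers who want a bit more detail, the VERDICT model roughly decomposes the diffusion MRI signal into three tissue compartments, and the intracellular volume fraction is the FIC metric used in this study. This is a simplified sketch of the model’s general form, not the study’s exact implementation:

```latex
% Simplified three-compartment VERDICT signal model (general form)
S(b) \approx f_{\mathrm{IC}}\, S_{\mathrm{IC}}(b)
          + f_{\mathrm{EES}}\, S_{\mathrm{EES}}(b)
          + f_{\mathrm{VASC}}\, S_{\mathrm{VASC}}(b),
\qquad f_{\mathrm{IC}} + f_{\mathrm{EES}} + f_{\mathrm{VASC}} = 1
```

Here S(b) is the diffusion-weighted signal at b-value b, the three compartments model restricted intracellular, hindered extracellular-extravascular, and pseudo-diffusing vascular water, and f_IC is the fractional intracellular volume (FIC) that the study used to characterize lesions.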

The UK-based researchers had 165 men with suspected prostate cancer undergo mpMRI and VERDICT MRI (73 later confirmed w/ significant PCa). Over the 3.5yr study, they found that VERDICT MRI-based lesion fractional intracellular volume (FIC) measurements had significant characterization advantages versus mpMRI-based apparent diffusion coefficient and PSA density measurements (ADC & PSAD):

  • VERDICT MRI-based FICs classified clinically significant prostate cancer lesions far more accurately than ADC and PSAD (AUCs: 0.96 vs. 0.85 & 0.74). 
  • VERDICT-based FICs also clearly differentiated clinically insignificant and significant prostate cancer among the study’s Likert 3 lesions (median FICs: 0.53 & 0.18) and Likert 4 lesions (median FICs: 0.60 & 0.28), while ADC and PSAD measurements couldn’t be used to show which of these lesions would be cancerous. 

The Takeaway

Given that up to 50% of men with positive PI-RADS scores or >3 Likert scores end up with negative biopsy results, these findings suggest that VERDICT MRI could reduce unnecessary prostate biopsies by a whopping 90%.

That makes this study a “massive leap forward” for prostate cancer diagnostics, and provides enough evidence to make VERDICT MRI just one successful large multi-center trial away from clinical adoption.

Prostate MR AI’s Experience Boost

A new European Radiology study showed that Siemens Healthineers’ AI-RAD Companion Prostate MR solution can improve radiologists’ lesion assessment accuracy (especially less-experienced rads), while reducing reading times and lesion grading variability. 

The researchers had four radiologists (two experienced, two inexperienced) assess lesions in 172 prostate MRI exams, with and without AI support, finding that AI-RAD Companion Prostate MR improved:

  • The less-experienced radiologists’ performance, significantly (AUCs: 0.66 to 0.80 & 0.68 to 0.80)
  • The experienced rads’ performance, modestly (AUCs: 0.81 to 0.86 & 0.81 to 0.84)
  • Overall PI-RADS category and Gleason score correlations (r = 0.45 to 0.57)
  • Median reading times (157 to 150 seconds)

The study also highlights Siemens Healthineers’ emergence as an AI research leader, leveraging its relationship/funding advantages over AI-only vendors and its (potentially) greater focus on AI research than its OEM peers to become one of imaging AI’s most-published vendors.

The Takeaway

Given the role that experience plays in radiologists’ prostate MRI accuracy, and noting prostate MRI’s historical challenges with variability, this study makes a solid case for AI-RAD Companion Prostate MR’s ability to improve rads’ diagnostic performance (without slowing them down). It’s also a reminder that Siemens Healthineers is serious about supporting its homegrown AI portfolio through academic research.

RevealDx & contextflow’s Lung CT Alliance

RevealDx and contextflow announced a new alliance that should advance the companies’ product and distribution strategies, and appears to highlight an interesting trend towards more comprehensive AI solutions.

The companies will integrate RevealDx’s RevealAI-Lung solution (lung nodule characterization) with contextflow’s SEARCH Lung CT software (lung nodule detection and quantification), creating a uniquely comprehensive lung cancer screening offering. 

contextflow will also become RevealDx’s exclusive distributor in Europe, adding to RevealDx’s global channel, which includes a distribution alliance with Volpara (exclusive in Australia/NZ, non-exclusive in US) and a platform integration deal with Sirona.

The alliance highlights contextflow’s new partner-driven strategy to expand SEARCH Lung CT beyond its image-based retrieval roots, coming just a few weeks after announcing an integration with Oxipit’s ChestEye Quality AI solution to identify missed lung nodules.

In fact, contextflow’s AI expansion efforts appear to be part of an emerging trend, as AI vendors work to support multiple steps within a given clinical activity (e.g. lung cancer assessments) or spot a wider range of pathologies in a given exam (e.g. CXRs):

  • Volpara has amassed a range of complementary breast cancer screening solutions, and has started to build out a similar suite of lung cancer screening solutions (including RevealDx & Riverain).
  • A growing field of chest X-ray AI vendors (Annalise.ai, Lunit, Qure.ai, Oxipit, Vuno) lead with their ability to detect multiple findings from a single CXR scan and AI workflow. 
  • Siemens Healthineers’ AI-RAD Companion Chest CT solution combines these two approaches, automating multiple diagnostic tasks (analysis, quantification, visualization, results generation) across a range of different chest CT exams and organs.

The Takeaway

contextflow and RevealDx’s European alliance seems to make a lot of sense, allowing contextflow to enhance its lung nodule detection/quantification findings with characterization details, while giving RevealDx the channel and lung nodule detection starting points that it likely needs.

The partnership also appears to represent another step towards more comprehensive and potentially more clinically valuable AI solutions, and away from the narrow applications that have dominated AI portfolios (and AI critiques) before now.

HM-MRI Beats mpMRI

University of Chicago researchers provided solid evidence that hybrid multidimensional MRI (HM-MRI) might be superior to multiparametric MRI (mpMRI) for diagnosing clinically significant prostate cancer.

That’s a big statement after nearly two decades of prostate MRI exams, but mpMRI’s continued variability challenges still leave room for improvement, and some believe HM-MRI’s quantitative approach could help add objectivity.

To test that theory, the researchers had four radiologists with different career experience (1 to 20yrs) interpret HM-MRI and mpMRI exams from 61 men with biopsy-confirmed prostate cancer, finding that the HM-MRI exams produced:

  • Higher AUCs among three of the four readers (0.61 vs. 0.66; 0.71 vs. 0.60; 0.59 vs. 0.50; 0.64 vs. 0.46), with the least experienced rad achieving the greatest AUC improvement 
  • Higher specificity among all four readers (48% vs. 37%; 78% vs. 26%; 48% vs. 0%; 46% vs. 7%)
  • Significantly greater interobserver agreement rates (Cronbach alpha: 0.88 vs. 0.26; >0.60 indicates reliability; see the short sketch after this list)
  • Far shorter average interpretation times (73 vs. 254 seconds)
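
For readers unfamiliar with that agreement metric, Cronbach’s alpha treats each reader as an “item” and measures how consistently the readers’ scores vary together across cases. Here’s a minimal sketch of the standard formula, using made-up scores rather than the study’s data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Standard Cronbach's alpha for a (cases x readers) score matrix."""
    k = scores.shape[1]                          # number of readers ("items")
    item_vars = scores.var(axis=0, ddof=1)       # per-reader variance across cases
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of each case's summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point suspicion scores from four readers on six lesions (illustrative only)
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 1, 2],
    [5, 5, 4, 5],
    [1, 1, 2, 1],
    [3, 4, 3, 3],
    [2, 3, 2, 2],
])
print(f"Cronbach alpha: {cronbach_alpha(scores):.2f}")  # values >0.60 are usually read as reliable
```

With scores like these, where the four readers largely track each other, alpha lands well above the 0.60 reliability threshold; widely scattered scores would push it toward zero.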

The Takeaway

As the study’s editorial put it, HM-MRI appears to be a “quantitative step in the right direction” for prostate MRI, and has the potential to address mpMRI’s variability, accuracy, and efficiency challenges.

Exo Acquires Medo AI

Exo took a big step towards making its handheld ultrasounds easier to use and adopt, acquiring AI startup Medo AI. Although unexpected, this is a logical and potentially significant acquisition that deserves a deeper look…

Exo plans to integrate Medo’s Sweep AI technology into its ultrasound platform, forecasting that this hardware-software combination will streamline Exo POCUS adoption among clinicians who lack ultrasound training/experience. 

  • Medo’s automated image acquisition and interpretation software has clearance for two exams (thyroid nodule assessments, developmental hip dysplasia screening), and it has more AI modules in development. 

Exo didn’t disclose acquisition costs, but Medo AI is relatively modest in size (23 employees on LinkedIn, no public info on VC rounds) and it’s unclear if it had any other bidders.

  • Either way, Exo can probably afford it following its $220M Series C in July 2021 (total funding now >$320M), especially considering that Medo’s use case directly supports Exo’s core strategy of expanding POCUS to more clinicians.

Some might point out how this acquisition continues 2022’s AI shakeup, which brought three other AI acquisitions (Aidence & Quantib by RadNet; Nines by Sirona) and at least two strategic pivots (MaxQ AI & Kheiron). 

  • That said, this is the first AI acquisition by a hardware vendor and it doesn’t represent the type of segment consolidation that everyone keeps forecasting.

Exo’s Medo acquisition does introduce a potential shift in the way handheld ultrasound vendors might approach expanding their AI software stack, after historically focusing on a mix of partnerships and in-house development. 

The Takeaway

Handheld ultrasound is perhaps the only medical imaging product segment that includes an even mix of the industry’s largest OEMs and extremely well-funded startups, setting the stage for fierce competition. 

That competition is even stronger when you consider that the handheld ultrasound segment’s primary market (point-of-care clinicians) is still early in its adoption curve, which places a big target on any products that could make handheld ultrasounds easier to use and adopt (like Medo AI).

Echo AI COVID Predictions

A new JASE study showed that AI-based echocardiography measurements can be used to predict COVID patient mortality, but manual measurements performed by echo experts can’t. This could be seen as yet another “AI beats humans” study (or yet another COVID AI study), but it also gives important evidence of AI’s potential to reduce echo measurement variability.

Starting with transthoracic echocardiograms from 870 hospitalized COVID patients (13 hospitals, 9 countries, 27.4% who later died), the researchers utilized Ultromics’ EchoGo Core AI solution and a team of expert readers to measure left ventricular ejection fraction (LVEF) and LV longitudinal strain (LVLS). They then analyzed the measurements and applied them to mortality prediction models, finding that the AI-based measurements:

  • Were “significant predictors” of patient mortality (LVEF: OR=0.974, p=0.003; LVLS: OR=1.060, p=0.004; see the sketch after this list), while the manual measurements couldn’t be used to predict mortality
  • Had significantly less variability than the experts’ manual measurements
  • Were similarly “feasible” as manual measurements when applied to the various echo exams
  • Showed stronger correlations with other COVID biomarkers (e.g. diastolic blood pressure)
  • Combined with other biomarkers to produce even more accurate mortality predictions
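
For context on where odds ratios like those come from (a minimal sketch using simulated data, not the study’s model or its EchoGo measurements): mortality is typically modeled with logistic regression, and each measurement’s odds ratio is simply the exponentiated model coefficient.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated cohort (illustrative only): directions chosen to mirror the ORs above,
# i.e. lower LVEF and higher LVLS values associated with higher mortality
n = 870
lvef = rng.normal(55, 10, n)                        # left ventricular ejection fraction (%)
lvls = rng.normal(18, 4, n)                         # LV longitudinal strain magnitude (%)
logit = -2.0 - 0.03 * (lvef - 55) + 0.06 * (lvls - 18)
died = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([lvef, lvls])
model = LogisticRegression().fit(X, died)

# Odds ratio per one-unit change in each measurement = exp(coefficient)
for name, coef in zip(["LVEF", "LVLS"], model.coef_[0]):
    print(f"{name}: OR = {np.exp(coef):.3f}")       # should land near OR<1 for LVEF, OR>1 for LVLS
```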

The authors didn’t seem too surprised that AI measurements had less variability, or by their conclusion that reducing measurement variability “consequently increased the statistical power to predict mortality.”

They also found that sonographers’ original scanning inconsistency was responsible for nearly half of the experts’ measurement variability, suggesting that a combination of echo guidance AI software (e.g. Caption or UltraSight) with echo reporting AI tools (e.g. Us2.ai or Ultromics) could “further reduce variability.”

The Takeaway

Echo AI measurements aren’t about to become a go-to COVID mortality biomarker (clinical factors and comorbidities are much stronger predictors), but this study makes a strong case for echo AI’s measurement consistency advantage. It’s also a reminder that reducing variability improves overall accuracy, which would be valuable for sophisticated prediction models or everyday echocardiography operations.
