Optellum’s NTAPC

Optellum joined the small group of imaging AI vendors who are on a path towards reimbursements, representing a major milestone for the company and another sign of progress for the business of imaging AI.

With Optellum’s “New Technology Ambulatory Payment Classification” (NTAPC), providers who use the Optellum Lung Cancer Prediction solution with Medicare patients can bill CMS $600-$700 for each use (CPT: 0721T).

Physicians would use Optellum LCP to analyze a Medicare patient’s CT scan, leveraging Optellum’s pulmonary nodule risk scores to support their decision whether to refer the patient to a pulmonologist. Then they would bill CMS for reimbursement.

However, like previous NTAPCs, this is just the first step in Optellum’s path towards full reimbursement coverage:

  • Regional Medicare Administrative Contractors will initially decide whether to reimburse on a case-by-case basis (and can decline reimbursements)
  • A similar process will happen with private plans
  • Reimbursements would only be nationally required once Optellum LCP is covered by each of the 12 MAC geographies and all commercial payors

Although reimbursement isn’t guaranteed, Optellum’s CMS-defined reimbursement rates and process represent a solid first step, especially considering that Perspectum’s and HeartFlow’s previous NTAPCs led to widespread coverage.

Optellum’s NTAPC also continues imaging AI’s overall progress towards reimbursements. Within the last two years, Viz.ai and Caption Health scored the first AI NTAPs (guaranteed add-on payments, but temporary) and startups like Nanox AI, Koios, and Perspectum landed AI’s first CPT III codes (reimbursements not guaranteed, but data collected for future reimbursement decisions). 

The Takeaway
Although reimbursements are still elusive for most AI vendors and not even guaranteed for most AI products that already have billing codes, it’s clear that we’re seeing more progress towards AI reimbursements. That’s good news for AI vendors, since reimbursements have repeatedly been shown to drive AI adoption and are necessary to show ROI for many AI products.

AI Experiences & Expectations

The European Society of Radiology just published new insights into how imaging AI is being used across Europe and how the region’s radiologists view this emerging technology.

The Survey – The ESR reached out to 27,700 European radiologists in January 2022 with a survey regarding their experiences and perspectives on imaging AI, receiving responses from just 690 rads.

Early Adopters – 276 of the 690 respondents (40%) had clinical experience using imaging AI, with the majority of these AI users:

  • Working at academic and regional hospitals (52% & 37% – only 11% at practices)
  • Leveraging AI for interpretation support, case prioritization, and post-processing (51.5%, 40%, 28.6%)

AI Experiences – The radiologists who do use AI revealed a mix of positive and negative experiences:

  • Most found diagnostic AI’s output reliable (75.7%)
  • Few experienced technical difficulties integrating AI into their workflow (17.8%)
  • The majority found AI prioritization tools to be “very helpful” or “moderately helpful” for reducing staff workload (23.4% & 62.2%)
  • However, far fewer reported that diagnostic AI tools reduced staff workload (22.7% Yes, 69.8% No)

Adoption Barriers – Most coverage of this study will likely focus on the fact that only 92 of the surveyed rads (13.3%) plan to acquire AI in the future, while 363 don’t intend to acquire AI (52.6%). The radiologists who don’t plan to adopt AI (including those who’ve never used AI) based their opinions on:

  • AI’s lack of added value (44.4%)
  • AI not performing as well as advertised (26.4%)
  • AI adding too much work (22.9%)
  • And “no reason” (6.3%)

US Context – These results are in the same ballpark as the ACR’s 2020 US-based survey (33.5% using AI, only 20% of non-users planned to adopt within 5 years), although 2020 feels like a long time ago.

The Takeaway

Even if this ESR survey might leave you asking more questions (What about AI’s impact on patient care? How often is AI actually being used? How do opinions differ between AI users and non-users?), more than anything it confirms what many of us already know… We’re still very early in AI’s evolution, and there are still plenty of performance and perception barriers that AI has to overcome.

Chest CT AI Efficiency

A new AJR study out of the Medical University of South Carolina showed that Siemens Healthineers’ AI-RAD Companion Chest CT solution significantly reduced radiologists’ interpretation times. Considering that radiologist efficiency is often sacrificed in order to achieve AI’s accuracy and prioritization benefits, this study is worth a deeper look.

MUSC integrated Siemens’ AI-RAD Companion Chest CT into their PACS workflow, providing its radiologists with automated image analysis, quantification, visualization, and results for several key chest CT exams.

Three cardiothoracic radiologists interpreted chest CT exams from 390 randomly assigned patients (195 w/ AI support), and the AI-supported interpretations proved significantly faster on average…

  • For the combined readers – 328 vs. 421 seconds 
  • For each individual radiologist – 289 vs. 344; 449 vs. 649; 281 vs. 348 seconds
  • For contrast-enhanced scans – 20% faster
  • For non-contrast scans – 24.2% faster
  • For negative scans – 26.4% faster
  • For positive scans without significant new findings – 25.7% faster
  • For positive scans with significant new findings – 20.4% faster

Overall, the solution drove a 22.1% average reduction in radiologist interpretation times, which works out to roughly an hour saved over a typical workday.
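For a rough sense of where that hour comes from, here’s a minimal back-of-the-envelope sketch using the study’s pooled reader averages; the 40-exams-per-day volume is a hypothetical illustration rather than a figure from the paper:

  # Rough check of the chest CT efficiency numbers (pooled reader averages)
  baseline_secs = 421        # mean interpretation time without AI support
  ai_assisted_secs = 328     # mean interpretation time with AI support
  saved_per_exam = baseline_secs - ai_assisted_secs        # 93 seconds per exam
  relative_reduction = saved_per_exam / baseline_secs      # ~0.221, i.e. ~22.1%
  exams_per_day = 40         # hypothetical daily chest CT volume, not from the study
  minutes_saved_per_day = saved_per_exam * exams_per_day / 60   # ~62 minutes
  print(f"{relative_reduction:.1%} reduction, ~{minutes_saved_per_day:.0f} min saved per day")

At roughly 40 chest CTs per reader per day, the 93-second-per-exam savings works out to about an hour; lighter or heavier caseloads would scale that figure accordingly.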

The authors didn’t explore the solution’s impact on radiologist accuracy, noting that AI accuracy has already been covered in plenty of previous studies. In fact, members of this same MUSC research team previously showed that AI-RAD Companion Chest CT identified abnormalities more accurately than many of its radiologists.

The Takeaway

Out of the hundreds of AI studies we see each year, very few have tried to measure efficiency gains and even fewer have shown that AI actually reduces radiologist interpretation times.
Given the massive exam volumes that radiologists are facing and the crucial role efficiency plays in AI ROI calculations, these results are particularly encouraging, and suggest that AI can indeed improve both accuracy and efficiency.

Burdenless Incidental AI

A team of IBM Watson Health researchers developed an interesting image and text-based AI system that could significantly improve incidental lung nodule detection, without being “overly burdensome” for radiologists. That seems like a clinical and workflow win-win for any incidental AI system, and makes this study worth a deeper look.

Watson Health’s R&D-stage AI system automatically detects potential lung nodules in chest and abdominal CTs, and then analyzes the text in corresponding radiology reports to confirm whether they mention lung nodules. In clinical practice, the system would flag exams with potentially missed nodules for radiologist review.

The researchers used the AI system to analyze 32k CTs sourced from three health systems in the US and UK. They then had radiologists review the 415 studies that the AI system flagged for potentially missed pulmonary nodules, finding that it:

  • Caught 100 exams containing at least one missed nodule
  • Flagged 315 exams that didn’t feature nodules (false positives)
  • Achieved a 24% overall positive predictive value
  • Produced just a 1% false positive rate (see the quick arithmetic check below)
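For readers who like to check the math, here’s a minimal sketch of how those last two figures fall out of the counts above; it assumes the ~1% false positive rate is measured against the full set of analyzed exams, which is an inference from the numbers rather than a stated detail of the paper:

  # Sketch of the PPV / false positive rate arithmetic (denominator assumption noted above)
  total_exams = 32_000       # CTs analyzed by the AI system
  flagged = 415              # exams flagged for potentially missed nodules
  true_positives = 100       # flagged exams that actually contained a missed nodule
  false_positives = flagged - true_positives        # 315 flagged exams without nodules
  ppv = true_positives / flagged                    # 100 / 415 ≈ 24%
  approx_fpr = false_positives / total_exams        # 315 / 32,000 ≈ 1%
  print(f"PPV ≈ {ppv:.1%}, approximate false positive rate ≈ {approx_fpr:.1%}")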

The AI system’s combined ability to detect missed pulmonary nodules while “minimizing” radiologists’ re-reading labor was enough to make the authors optimistic about this type of AI. They specifically suggested that it could be a valuable addition to Quality Assurance programs, improving patient care while avoiding the healthcare and litigation costs that can come from missed findings.

The Takeaway

Watson Health’s new AI system adds to incidental AI’s growing momentum, joining a number of research and clinical-stage solutions that emerged in the last two years. However, this system’s ability to cross-reference radiology report text and its apparent ability to minimize false positives set it apart from most of them.

Even if most incidental AI tools aren’t ready for everyday clinical use, and their potential to increase re-read labor might be alarming to some rads, these solutions’ ability to catch earlier stage diseases and minimize the impact of diagnostic “misses” could earn the attention of a wide range of healthcare stakeholders going forward.

Autonomous & Ultrafast Breast MRI

A new study out of the University of Groningen highlighted the scanning and diagnostic efficiency advantages that might come from combining ultrafast breast MRI with autonomous AI. That might make some readers uncomfortable, but the fact that autonomous AI is one of 2022’s most controversial topics makes this study worth some extra attention.

The researchers used 837 “TWIST” ultrafast breast MRI exams from 488 patients (118 abnormal breasts, 34 w/ malignant lesions) to train and validate a deep learning model to detect and automatically exclude normal exams from radiologist workloads. They then tested it against 178 exams from 149 patients from the same institution (55 abnormal, 30 w/ malignant lesions), achieving a 0.81 AUC.

When evaluated at a conservative 0.25 detection error threshold, the DL model:

  • Achieved 98% sensitivity and negative predictive value
  • Misclassified one abnormal exam as normal (out of 55)
  • Correctly classified all exams with malignant lesions
  • Would have reduced radiologists’ exam workload by 6.2% (-15.7% at breast level)

When evaluated at a 0.37 detection error threshold, the model:

  • Achieved 95% sensitivity and a 97% negative predictive value (still high)
  • Misclassified three abnormal exams (3 of 55), including one malignant lesion
  • Would have reduced radiologists’ exam workload by 15.7% (-30.6% at breast level; a sketch of this threshold trade-off follows below)
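To make that threshold trade-off concrete, here’s a minimal sketch of how this kind of triage rule generally works: each exam gets a model abnormality score, anything scoring below the operating threshold is auto-excluded as normal, and sensitivity and workload reduction follow from which exams get excluded. The scores, labels, and function below are hypothetical illustrations, not the study’s actual model or data:

  # Minimal sketch of threshold-based triage that auto-excludes "normal" exams
  def triage(scores, labels, threshold):
      # scores: model abnormality scores per exam (higher = more suspicious)
      # labels: 1 if the exam is truly abnormal, 0 if normal
      excluded = [s < threshold for s in scores]           # auto-called normal, skipped by readers
      abnormal = sum(labels)
      missed = sum(1 for e, y in zip(excluded, labels) if e and y == 1)
      sensitivity = (abnormal - missed) / abnormal         # abnormal exams still reaching readers
      workload_reduction = sum(excluded) / len(scores)     # share of exams radiologists skip
      return sensitivity, workload_reduction

  # Hypothetical example: raising the threshold excludes more exams but risks missing abnormals
  scores = [0.05, 0.10, 0.20, 0.30, 0.45, 0.60, 0.80, 0.90]
  labels = [0, 0, 0, 1, 0, 1, 1, 1]
  for t in (0.25, 0.37):
      sens, workload = triage(scores, labels, t)
      print(f"threshold {t}: sensitivity {sens:.0%}, workload reduction {workload:.0%}")

The same logic explains the study’s pattern: the higher 0.37 threshold more than doubled the workload reduction but let one exam with a malignant lesion slip through, while the conservative 0.25 threshold kept every malignant exam in front of a radiologist.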

These radiologist workflow improvements would complement the TWIST ultrafast MRI sequence’s far shorter magnet time (2 vs. 20 minutes for current protocols), while the DL model could further reduce scan times by automatically ending exams once they are flagged as normal.

The Takeaway

Even if the world might not be ready for this type of autonomous AI workflow, this study is a good example of how abbreviated MRI protocols and AI could combine to improve both imaging team and radiologist efficiency. It’s also the latest in a series of studies exploring how AI could exclude normal scans from radiologist workflows, suggesting that the development and design of this type of autonomous AI will continue to mature.

Automating Stress Echo

A new JACC study showed that Ultromics’ EchoGo Pro AI solution can accurately classify stress echocardiograms, while improving clinician performance with a particularly challenging and operator-dependent exam. 

The researchers used EchoGo Pro to independently analyze 154 stress echo studies, leveraging the solution’s 31 image features to identify patients with severe coronary artery disease with a 0.927 AUC (84.4% sensitivity; 92.7% specificity). 

EchoGo Pro maintained similar performance with a version of the test dataset that excluded the 38 patients with known coronary artery disease or resting wall motion abnormalities (90.5% sensitivity; 88.4% specificity).

The researchers then had four physicians with different levels of stress echo experience analyze the same 154 studies with and without AI support, finding that the EchoGo Pro reports:

  • Improved the readers’ average AUC – 0.877 vs. 0.931
  • Increased their mean sensitivity – 85% vs. 95%
  • Didn’t hurt their specificity – 83.6% vs. 85%
  • Increased their number of confident reads – 440 vs. 483
  • Reduced their number of non-confident reads – 152 vs. 109
  • Improved their diagnostic agreement rates – 0.68-0.79 vs. 0.83-0.97

The Takeaway

Ultromics’ stress echo reports improved the physicians’ interpretation accuracy, confidence, and reproducibility, without increasing false positives. That list of improvements satisfies most of the requirements clinicians have for AI (in addition to speed/efficiency), and it represents another solid example of echo AI’s real-world potential.

Imaging AI’s Unseen Potential

Amid the dozens of imaging AI papers and presentations that came out over the last few weeks were three compelling new studies highlighting how much “unseen” information AI can extract from medical images, and the massive impact this information could have. 

Imaging-Led Population Health – An excellent presentation from Ayis Pyrros, MD placed radiology at the center of healthcare’s transition to value-based care and population health, highlighting the AI training opportunities that will come with more value-based care HCC codes and imaging AI’s untapped potential for early disease detection and management. Dr. Pyrros specifically emphasized chest X-ray’s potential given the exam’s ubiquity (26M Medicare CXRs in 2021), CXR AI’s ability to predict outcomes (e.g. mortality, comorbidities, hospital stays), and how opportunistic AI screening can/should support proactive care that benefits both patients and health systems.

  • Healthcare’s value-based overhaul has traditionally been seen as a threat to radiology’s fee-for-service foundations. Even if that might still be true from a business model perspective, Dr. Pyrros makes it quite clear that the shift to value-based care could make radiology even more important — and importance is always good for business.

AI Race Detection – The final peer-reviewed version of the landmark study showing that AI models can accurately predict patient race was officially published, further confirming that AI can detect patients’ self-reported race by analyzing medical image features. The new paper showed that AI very accurately detects patient race across modalities and anatomical regions (AUCs: CXRs 0.91 – 0.99, chest CT 0.89 – 0.96, mammography 0.81), without relying on proxies or imaging-related confounding features (BMI, disease distribution, and breast density all had ≤0.61 AUCs).

  • If imaging AI models intended for clinical tasks can identify patients’ races, they could be applying the same racial biomarkers to diagnosis, thus reproducing or exacerbating healthcare’s existing racial disparities. That’s an important takeaway whether you’re developing or adopting AI.

CXR Cost Predictions – The smart folks at the UCSF Center for Intelligent Imaging developed a series of CXR-based deep learning models that can predict patients’ future healthcare costs. Developed with 21,872 frontal CXRs from 19,524 patients, the best performing models identified which patients would have a top-50% personal healthcare cost after one, three, and five years with reasonable accuracy (AUCs: 0.806, 0.771, 0.729).

  • Although predicting which patients will have higher costs could be useful on its own, these findings also suggest that similar CXR-based DL models could be used to flag patients who may deteriorate, initiate proactive care, or support healthcare cost analysis and policies.

AI-Assisted Radiographers

A new European Radiology study provided what might be the first insights into whether AI can allow radiographers to independently read lung cancer screening exams, while alleviating the resource challenges that have slowed LDCT screening program rollouts.

This is the type of study that makes some radiologists uncomfortable, but its results suggest that rads’ role in lung cancer screening remains very secure.

The researchers had two trained UK-based radiographers read 716 LDCT exams (158 w/ significant pulmonary nodules) using a computer-assisted detection AI solution, and compared their reads with interpretations from radiologists who didn’t have CADe assistance.

The radiographers had significantly lower sensitivity than the radiologists (68% & 73.7%; p < 0.001), leading to 61 false negative exams. However, the two CADe-assisted radiographers did achieve:

  • Good sensitivity with cancers confirmed from baseline scans – 83.3% & 100%
  • Relatively high specificity – 92.1% & 92.7%
  • Low false-positive rates – 7.9% and 7.3%

The CADe AI solution might have both helped and hurt the radiographers’ performance, as CADe missed 20 of the radiographers’ 40 false negative nodules, and four of their seven false negative malignant nodules. 

Even as LDCT CADe tools become far more accurate, they might not be able to fill in radiographers’ incidental findings knowledge gap. The radiographers achieved either “good” or “fair” interobserver agreement rates with radiologists for emphysema and CAC findings, but the variety of other incidental pathologies was “too broad to reasonably expect radiographers to detect and interpret.”

The Takeaway
Although CADe-assisted radiographer studies might concern some radiologists, this seems like an important aspect of AI to understand given the workload demands that come with lung cancer screening programs, and the need to better understand how clinicians and AI can work together. 

The good news for any concerned radiologists is that this study shows LDCT reporting is too complex and current CADe solutions are too limited for CADe-equipped radiographers to independently read LDCTs… “at least for the foreseeable future.”

Who Owns LVO AI?

The FDA’s public “reminder” that studies analyzed by AI-based LVO detection tools (CADt) still require radiologist interpretation became one of the hottest stories in radiology last week, and although many of us didn’t realize it at first, it made a big statement about how AI-based care coordination is changing the way care teams and radiologists work together.

The FDA decided to issue this clarification after finding that some providers were using LVO AI tools to guide their stroke treatment decisions and “might not be aware” that they need to base those decisions on radiologist interpretations. The agency reiterated that these tools are only intended to flag suspicious exams and support diagnostic prioritization, revealing plans to work with LVO AI vendors to make sure everyone understands radiologists’ role in these workflows. 

This story was covered in all the major radiology publications and sparked a number of social media discussions with some interesting theories:

  • One social post suggested that the FDA is preemptively taking a stand against autonomous AI
  • Some posts and articles wondered if AI might be overly-influencing radiologists’ diagnoses
  • The Imaging Wire didn’t even mention care coordination until a reader emailed with a clarification and we went back and edited our initial story

That reader had a point. It does seem like this is more of a care coordination issue than an AI diagnostics issue, considering that:

  • These tools send notifications and images to interventionalists/surgeons before radiologists are able to read the same cases
  • Two of the three leading LVO AI care coordination tools are marketed to everyone on the stroke team except radiologists (go check their sites)
  • Before AI care coordination came along, it would have been hard to believe that stroke team members “might not be aware” that they needed to check radiologist interpretations before making care decisions

The Takeaway

LVO AI care coordination tools have been a huge commercial and clinical success, and care coordination platforms are quickly expanding to new use cases.

That seems like good news for emergency patients and care teams, but as the FDA reminded us last week, it also means that we’re going to need more safeguards to ensure that care decisions are based on radiologists’ diagnoses — even if the AI tool already informed care teams what the diagnosis might be.

Us2.ai Automates Globally

One of imaging AI’s hottest segments just got even hotter with the completion of Us2.ai’s $15M Series A round and the global launch of its flagship echocardiography AI solution. It’s been at least a year since we led off a newsletter with a funding announcement, but Us2.ai’s unique foundation and the echo AI segment’s rapid evolution make this a story worth telling…

Us2.ai has already achieved FDA clearance, a growing list of clinical evidence, and key hardware and pharma alliances (EchoNous & AstraZeneca). 

  • The Singapore-based startup also has a unique level of credibility, including co-founders with a history of clinical and business success, and VC support from IHH Healthcare (the world’s second largest health system).
  • With its funding secured, Us2.ai will accelerate its commercial and regulatory expansion, with a focus on driving global clinical adoption (US, Europe, and Asia) and developing new alliances (hardware vendors, healthcare providers, researchers, pharma).

Us2.ai joins a crowded echo AI arena, which might have more commercial-stage vendors than all other ultrasound AI segments combined. In fact, the major echo guidance (Caption Health, UltraSight) and echo reporting (DiA Imaging, Ultromics, Us2.ai) AI startups have already generated more than $180M in combined VC funding and forged a number of major hardware and PACS partnerships.

  • This influx of echo AI startups might be warranted, given echocardiography’s workforce, efficiency, and variability challenges. It might even prove to be visionary if handheld ultrasounds continue their rapid expansion to new users and settings (including primary and at-home care).
  • Us2.ai will have to rely on its reporting advantages to stand out against its well-established competitors, as it is the only vendor to completely automate echo reporting (complete editable/explainable reports in 2 minutes) and analyze every chamber of the heart (vs. just the left ventricle with some vendors).
  • That said, the incumbent echo AI players have big head starts and the impact of Us2.ai’s automation advantage will rely on ultrasound OEMs’ openness to new alliances and (of course) the rate that providers embrace AI for echo reporting.

The Takeaway

Even if many cardiologists and sonographers would have a hard time differentiating the various echo AI solutions, this is a segment that’s showing a high level of product-market fit. That’s more than you can say for most imaging AI segments, and product advancements like Us2.ai’s should improve this alignment. It might even help drive widespread adoption.
