Radiology’s Smart New Deal

A new Journal of Digital Imaging editorial from UCLA radiology chair Dieter R. Enzmann, MD, proposed a complete overhaul of how radiology reports are designed and distributed, in a way that should make sense to radiology outsiders but might make some folks within radiology uncomfortable.

Dr. Enzmann’s “Smart New Deal” proposes that radiology reports and reporting workflows should evolve to primarily support smartphone-based usage for both patients and physicians, ensuring that reports are:

  • Widely accessible 
  • Easily navigated and understood 
  • Built with empathy for current realities (info overload, time scarcity, mobility)
  • And widely utilized… because they are accessible, simple, and understandable

To achieve those goals, Dr. Enzmann proposes a “creative destruction” of our current reporting infrastructure, helped by ongoing improvements in foundational technologies (e.g. cloud, interoperability) and investments from radiology’s tech leaders (or from their future disruptors).

Despite Dr. Enzmann’s impressive credentials, the people of radiology might have a hard time coming to terms with this vision, given that:

  • Radiology reports are mainly intended for referring physicians, and referrers don’t seem to be demanding simplified phone-native reports (yet)
  • This is a big change given how reports are currently formatted and accessed
  • Patient-friendly features that require new labor often face resistance
  • It might make more sense for this smartphone-centric approach to cover patients’ entire healthcare journeys (not just radiology reports)

The Takeaway

It can be hard to envision a future when radiology reports are primarily built for smartphone consumption.

That said, few radiologists or rad vendors would argue against other data-based industries making sure their products (including their newsletters) are accessible, understandable, and actionable. Many might also recognize that some of the hottest imaging segments are already smartphone-native (e.g. AI care coordination solutions, PocketHealth's image sharing, handheld POCUS), while some of the biggest trends in radiology focus on making reports easier for patients and referrers to consume.

Smartphone-first reporting might not be a sure thing, but the trends we’re seeing do suggest that efforts to achieve Dr. Enzmann’s core reporting goals will be rewarded no matter where technology takes us.

Cleerly’s Downstream Effect

A new AJR study showed that Cleerly’s coronary CTA AI solution detects obstructive coronary artery disease (CAD) more accurately than myocardial perfusion imaging (MPI), and could substantially reduce unnecessary invasive angiographies. 

The researchers used Cleerly to analyze coronary CTAs from 301 patients with stable myocardial ischemia symptoms who also received stress MPI exams. They then compared the Cleerly CCTA and MPI results against the patients' invasive angiography exams, including quantitative coronary angiography (QCA) and fractional flow reserve (FFR) measurements.

The Cleerly-based coronary CTA results significantly outperformed MPI for predicting stenosis and caught cases that MPI-based ischemia results didn’t flag:

  • Cleerly AI was more accurate at detecting obstructive stenosis (≥50%; 0.88 vs. 0.66 AUCs)
  • Cleerly AI was more accurate at identifying severe stenosis (≥70%; 0.92 vs. 0.81 AUCs)
  • Cleerly AI was far more accurate at detecting FFR-based signs of ischemia (<0.80; 0.90 vs. 0.71 AUCs)
  • Out of 102 patients with negative MPI ischemia results, Cleerly identified 55 patients with obstructive stenosis and 20 with severe stenosis (54% & 20%)
  • Out of 199 patients with positive MPI ischemia results, Cleerly identified 46 patients with non-obstructive stenosis (23%)

MPI and Cleerly-based CCTA analysis also worked well together. The combination of ≥50% stenosis via Cleerly and ischemia in MPI achieved 95% sensitivity and 63% specificity for detecting serious stenosis (vs. 74% & 43% using QCA measurements).

Based on those results, pathways that use a Cleerly AI-based CCTA benchmark of ≥70% stenosis to approve patients for invasive angiography would reduce invasive angiography utilization by 39%. Meanwhile, workflows requiring both a positive MPI ischemia result and a Cleerly AI CCTA benchmark of ≥70% would reduce invasive angiography utilization by 49%.
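
To make those two pathways concrete, here's a minimal sketch of the referral rules in Python (the function and field names are our own illustrative assumptions, not Cleerly's actual outputs or API):

```python
# Two hypothetical angiography-referral pathways based on the study's benchmarks.

def refer_ccta_only(max_stenosis_pct: float) -> bool:
    """Pathway 1: refer to invasive angiography on the Cleerly CCTA result alone."""
    return max_stenosis_pct >= 70.0  # severe stenosis benchmark (~39% fewer angiographies)

def refer_mpi_plus_ccta(max_stenosis_pct: float, mpi_ischemia_positive: bool) -> bool:
    """Pathway 2: require BOTH a positive MPI ischemia result and >=70% stenosis
    (~49% fewer angiographies in the study's modeling)."""
    return mpi_ischemia_positive and max_stenosis_pct >= 70.0
```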

The Takeaway

We're seeing strong research and policy momentum towards using coronary CTA as the primary CAD diagnosis method and reducing reliance on invasive angiography. This and other recent studies suggest that CCTA AI solutions like Cleerly could play a major role in that CCTA-first shift.

NYU’s Video Reporting Experiment

A new AJR study out of NYU just provided what might be the first significant insights into how patient-friendly video reports impact radiologists and patients.

Leveraging a new Visage 7 video feature and 3D rendering from Siemens Healthineers, NYU organized a four-month study that encouraged and evaluated patient-centered video reports (w/ simple video + audio explanations). 

During the study period, just 105 out of 227 NYU radiologists created videos, resulting in 3,763 total video reports. The videos were included within NYU’s standard radiology reports and made available via its patient portal.

The video reports added an average of 4 minutes of recording time to radiologists' workflows (± 2:21), with abnormal reports understandably taking longer than normal reports (5:30 vs. 4:15, a difference that wasn't statistically significant). The authors admitted that video creation has to get faster in order to achieve clinical adoption, revealing plans to use standardized voice macros to streamline the process.

Patients viewed just 864 unique video reports, leaving 2,899 videos unviewed. However, when NYU moved the video links above the written section late in the study period, the share of patients who watched their videos jumped from 20% to 40%. Patients who watched the videos also really liked them:

  • Patients scored their overall video report experiences a 4.7 out of 5
  • The videos' contribution to patients' diagnostic understanding also scored 4.7 out of 5
  • 56% of patients reported reduced anxiety due to the videos (vs. 1% reporting increased anxiety)
  • 91% of patients preferred video + written reports (vs. 2% w/ written-only)

Although not the videos’ intended audience, referring physicians viewed 214 unique video reports, and anecdotes suggested that the videos helped referrers explain findings to their patients.

The Takeaway

We’ve covered plenty of studies showing that patients want to review their radiology reports, but struggle to understand them. We’ve also seen plenty of suggestions that radiologists want to improve their visibility to patients and highlight their role in patient care.

This study shows that video reports could satisfy both of those needs, while confirming that adopting video reporting wouldn't require significant infrastructure changes (if your PACS supports video), though it would add roughly four minutes to radiologists' reporting workflows.

That doesn’t suggest a major increase in video reporting will come any time soon, especially considering most practices/departments’ focus on efficiency, but it does make future video reporting adoption seem a lot more realistic (or at least possible).

Who Owns LVO AI?

The FDA's public "reminder" that studies analyzed by AI-based LVO detection tools (CADt) still require radiologist interpretation became one of the hottest stories in radiology last week, and although many of us didn't realize it at first, it made a big statement about how AI-based care coordination is changing the way care teams and radiologists work together.

The FDA decided to issue this clarification after finding that some providers were using LVO AI tools to guide their stroke treatment decisions and “might not be aware” that they need to base those decisions on radiologist interpretations. The agency reiterated that these tools are only intended to flag suspicious exams and support diagnostic prioritization, revealing plans to work with LVO AI vendors to make sure everyone understands radiologists’ role in these workflows. 
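
In other words, CADt output is a triage signal, not a diagnosis. Here's a minimal sketch of that intended role (the field and function names are hypothetical, not any vendor's actual API):

```python
# CADt's cleared role: reprioritize the radiologist worklist, nothing more.
# Treatment decisions still wait on the radiologist's interpretation.

def prioritize_worklist(studies: list[dict]) -> list[dict]:
    """Move CADt-flagged studies to the front of the reading queue."""
    return sorted(studies, key=lambda s: not s.get("cadt_lvo_flag", False))

worklist = [
    {"accession": "A1", "cadt_lvo_flag": False},
    {"accession": "A2", "cadt_lvo_flag": True},  # suspected LVO, read first
]
for study in prioritize_worklist(worklist):
    print(study["accession"])  # A2 prints first; its diagnosis still comes from the read
```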

This story was covered in all the major radiology publications and sparked a number of social media discussions with some interesting theories:

  • One social post suggested that the FDA is preemptively taking a stand against autonomous AI
  • Some posts and articles wondered if AI might be overly influencing radiologists' diagnoses
  • The Imaging Wire didn’t even mention care coordination until a reader emailed with a clarification and we went back and edited our initial story

That reader had a point. It does seem like this is more of a care coordination issue than an AI diagnostics issue, considering that:

  • These tools send notifications and images to interventionalists/surgeons before radiologists are able to read the same cases
  • Two of the three leading LVO AI care coordination tools are marketed to everyone on the stroke team except radiologists (go check their sites)
  • Before AI care coordination came along, it would have been hard to believe that stroke team members “might not be aware” that they needed to check radiologist interpretations before making care decisions

The Takeaway

LVO AI care coordination tools have been a huge commercial and clinical success, and care coordination platforms are quickly expanding to new use cases.

That seems like good news for emergency patients and care teams, but as the FDA reminded us last week, it also means that we’re going to need more safeguards to ensure that care decisions are based on radiologists’ diagnoses — even if the AI tool already informed care teams what the diagnosis might be.

Us2.ai Automates Globally

One of imaging AI's hottest segments just got even hotter with the completion of Us2.ai's $15M Series A round and the global launch of its flagship echocardiography AI solution. It's been at least a year since we led off a newsletter with a funding announcement, but Us2.ai's unique foundation and the echo AI segment's rapid evolution make this a story worth telling…

Us2.ai already has FDA clearance, a growing body of clinical evidence, and key hardware and pharma alliances (EchoNous & AstraZeneca).

  • The Singapore-based startup also has a unique level of credibility, including co-founders with a history of clinical and business success, and VC support from IHH Healthcare (the world's second-largest health system).
  • With its funding secured, Us2.ai will accelerate its commercial and regulatory expansion, with a focus on driving global clinical adoption (US, Europe, and Asia) and developing new alliances (hardware vendors, healthcare providers, researchers, pharma).

Us2.ai joins a crowded echo AI arena, which might have more commercial-stage vendors than all other ultrasound AI segments combined. In fact, the major echo guidance (Caption Health, UltraSight) and echo reporting (DiA Imaging, Ultromics, Us2.ai) AI startups have already raised more than $180M in combined VC funding and forged a number of major hardware and PACS partnerships.

  • This influx of echo AI startups might be warranted, given echocardiography’s workforce, efficiency, and variability challenges. It might even prove to be visionary if handheld ultrasounds continue their rapid expansion to new users and settings (including primary and at-home care).
  • Us2.ai will have to rely on its reporting advantages to stand out against its well-established competitors, as it is the only vendor to completely automate echo reporting (complete editable/explainable reports in 2 minutes) and analyze every chamber of the heart (vs. just the left ventricle with some vendors).
  • That said, the incumbent echo AI players have big head starts, and the impact of Us2.ai's automation advantage will depend on ultrasound OEMs' openness to new alliances and (of course) the rate at which providers embrace AI for echo reporting.

The Takeaway

Even if many cardiologists and sonographers would have a hard time differentiating the various echo AI solutions, this is a segment that's showing a high level of product-market fit. That's more than you can say for most imaging AI segments, and product advancements like Us2.ai's should improve this alignment. They might even help drive widespread adoption.

The Case for Algorithmic Audits

A new Lancet Digital Health study could have become one of the many “AI rivals radiologists” papers that we see each week, but it instead served as an important lesson that traditional performance tests might not prove that AI models are actually safe for clinical use.

The Model – The team developed their proximal femoral fracture detection DL model using 45.7k frontal X-rays performed at Australia’s Royal Adelaide Hospital (w/ 4,861 fractures).

The Validation – They then tested it against a 4,577-exam internal set (w/ 640 fractures), 400 of which were also interpreted by five radiologists (w/ 200 fractures), and against an 81-image external validation set from Stanford.

The Results – All three tests produced results that a typical study might have viewed as evidence of high-performance: 

  • The model outperformed the five radiologists (0.994 vs. 0.969 AUCs)
  • It beat the best-performing radiologist's sensitivity (95.5% vs. 94.5%) and specificity (99.5% vs. 97.5%)
  • It generalized well with the external Stanford data (0.980 AUC)

The Audit – Despite the strong results, a follow-up audit revealed that the model might make some predictions for the wrong reasons, suggesting that it is unsafe for clinical deployment:

  • One false negative X-ray included an extremely displaced fracture that human radiologists would catch
  • X-rays featuring abnormal bones or joints had a 50% false negative rate, far higher than the reader set's overall false negative rate (2.5%; see the sketch after this list)
  • Saliency maps showed that AI decisions were almost never based on the outer region of the femoral neck, even with images where that region was clinically relevant (but it still often made the right diagnosis)
  • The model scored a high AUC with the Stanford data, but showed a substantial model operating point shift
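
That kind of subgroup audit doesn't require exotic tooling. Here's a minimal sketch in Python with pandas (the column names, toy data, and 0.5 operating point are our illustrative assumptions, not the study's actual pipeline):

```python
# Compare the model's overall false negative rate (FNR) with its FNR on a
# clinically meaningful subgroup -- the kind of gap that headline AUCs can hide.
import pandas as pd

df = pd.DataFrame({
    "y_true": [1, 1, 1, 0, 1],              # 1 = fracture present
    "y_score": [0.9, 0.2, 0.8, 0.1, 0.4],   # model output probabilities
    "abnormal_anatomy": [False, True, False, False, True],
})

df["y_pred"] = df["y_score"] >= 0.5  # assumed deployed operating point

def false_negative_rate(group: pd.DataFrame) -> float:
    positives = group[group["y_true"] == 1]
    return 1.0 - positives["y_pred"].mean() if len(positives) else float("nan")

print("overall FNR: ", false_negative_rate(df))
print("subgroup FNR:", false_negative_rate(df[df["abnormal_anatomy"]]))
```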

The Case for Auditing – Although the study might not have started with this goal, it ended up becoming an argument for more sophisticated preclinical auditing. It even led to a separate paper outlining the team's algorithmic auditing process, which among other things suggested that AI users and developers should co-own audits.

The Takeaway

Auditing generally isn't the most exciting topic in any field, but this study shows that it's exceptionally important for imaging AI. It also suggests that audits might be necessary for achieving the most exciting parts of AI, like improving outcomes and efficiency, earning clinician trust, and increasing adoption.

Envisioning A Difficult Future

S&P Global Ratings’ decision to downgrade Envision Healthcare might have been largely overlooked during another busy healthcare news week, but it could prove to be part of one of the biggest stories in healthcare economics.

About Envision – The private equity-backed mega practice employs more than 25k clinicians across hundreds of US hospitals, including roughly 800 radiologists who perform over 10 million reads per year. 

The Downgrade – S&P downgraded Envision Healthcare to ‘CCC’ (from ‘CCC+’) and assigned it a ‘Negative’ CreditWatch rating, citing the company’s “inadequate” liquidity, a missed financial filing deadline, and a challenging path forward. Envision owes $700M by October 2023 (and more after that), but S&P expects the company to end 2022 with less than $100M in cash, risking more short-term downgrades and bigger long-term disruptions.

The Background – If you’re wondering how Envision found itself in this situation, a recent Prospect.org exposé has some answers (or at least its version of the answers):

  • When private equity giant KKR acquired Envision in 2018, it burdened the company with billions in debt, including a $5.3B first-lien term loan due in 2025
  • KKR’s initial strategy involved keeping most of Envision’s clinicians out-of-network (and earning higher surprise billing rates), but Envision moved many of its physicians in-network amid public backlash and looming legislation 
  • Ongoing surprise billing legislation spooked investors, causing Envision’s first-lien term loan to trade for 50 cents on the dollar in early 2020, before bouncing back to a somewhat-less-distressed 70-80 cent range later that year
  • The COVID pandemic further strained Envision’s finances, as many of its core specialties saw major volume declines (emergency, anesthesiology, radiology, GI, etc.)
  • Envision avoided bankruptcy thanks to an estimated $100M CARES Act bailout and help from its creditors
  • The final surprise billing legislation turned out to be pretty favorable for Envision, but not as favorable as back in the pre-legislation days
  • As of March 2022, Envision’s $5.3B first-lien term loan was still trading in distressed territory (73 cents), and it has other loans to pay off too

The Path Forward – It’s hard to predict how this will work out for Envision, although Prospect.org suggests that it might involve KKR splitting Envision into two companies. One could be saddled with all the debt and destined for bankruptcy, while the other entity (and KKR) could emerge “unscathed.”

The Takeaway

For many in healthcare, this is a cautionary tale about what can go wrong when private equity influence is combined with an over-reliance on a disputed business model (in this case surprise billing) and a global pandemic. It also makes you wonder if other mega practices are in similar situations.

Imaging AI’s Big 2021

Signify Research's latest imaging AI VC funding report revealed an unexpected surge in 2021, along with major funding shifts that might explain why many of us didn't see it coming. Here are some of Signify's big takeaways, and here's where to get the full report.

AI’s Path to $3.47B – Imaging AI startups have raised $3.47B in venture funding since 2015, helped by a record-high $815M in 2021 after several years of falling investments (vs. 2020’s $592M, 2019’s $450M, 2018’s $790M).

Big Get Bigger – That $3.47B funding total came from over 200 companies and 290 deals, although the 25 highest-funded companies were responsible for 80% of all capital raised. VCs increased their focus on established AI companies in 2021, resulting in record-high late-stage funding (~$723.5M), record-low Pre-Seed/Seed funding (~$7M), and a major increase in average deal size (~$33M vs. ~$12M in 2020).

Made in China – If you're surprised that 2021 was a record AI funding year, that's probably because much of it targeted Chinese companies (~$260M vs. the US' ~$150M), continuing a recent trend (China's AI VC share was 45% in 2020 and 26% in 2019). We're also seeing major funding go to South Korea's and Australia's top startups, adding to APAC AI vendors' funding leadership.

Health VC Context – Although imaging AI’s $815M 2021 funding total seems big for a category that’s figuring out its path towards full adoption, the amount VC firms are investing in other areas of healthcare makes it seem pretty reasonable. Our two previous Digital Health Wire issues featured seven digital health startup funding rounds with a total value of $267M (and that’s from just one week).

The Takeaway

Signify correctly points out that imaging AI funding remains strong despite a list of headwinds (COVID, regulatory hurdles, a lack of reimbursement), while showing more signs of AI market maturation (larger funding rounds to fewer players) and suggesting that consolidation is on the way. Those factors will likely continue in 2022. However, more innovation is surely on the way too and quite a few regional AI powerhouses still haven't expanded globally, suggesting that the next steps in AI's evolution won't be as straightforward as some might think.
