Real-World Stroke AI Implementation

Time is brain. That simple saying encapsulates the urgency in diagnosing and treating stroke, when just a few hours can mean a huge difference in a patient’s recovery. A new study in Clinical Radiology shows the potential for Nicolab’s StrokeViewer AI software to improve stroke diagnosis, but also underscores the challenges of real-world AI implementation.

Early stroke research recommended that patients receive treatment – such as with mechanical thrombectomy – within 6-8 hours of stroke onset. 

  • CT is a favored modality to diagnose patients, and the time element is so crucial that some health networks have implemented mobile stroke units with ambulances outfitted with on-board CT scanners. 

AI is another technology that can help speed time to diagnosis. 

  • AI analysis of CT angiography scans can help identify cases of acute ischemic stroke missed by radiologists, in particular cases of large vessel occlusion, for which one study found a 20% miss rate. 

The U.K.’s National Health Service has been looking closely at AI to provide 24/7 LVO detection and improve accuracy in an era of workforce shortages.

  • StrokeViewer is a cloud-based AI solution that analyzes non-contrast CT, CT angiography, and CT perfusion scans and notifies clinicians when a suspected LVO is detected. Reports can be viewed via PACS or on a smartphone.

In the study, NHS researchers shared their experiences with StrokeView, which included difficulties with its initial implementation but ultimately improved performance after tweaks to the software.  

  • For example, researchers encountered what they called “technical failures” in the first phase of implementation, mostly because radiographers used varying protocol names for CTA scans that the software didn’t recognize. 
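One common mitigation for this kind of failure is normalizing scan series descriptions before routing them to the AI. A minimal sketch of that idea, assuming keyword matching on DICOM-style series descriptions (the keyword list and example names below are illustrative, not Nicolab’s actual logic):

```python
# Hypothetical sketch: recognize CTA scans despite inconsistent protocol
# naming by different radiographers. Keywords here are assumptions for
# illustration only.
KNOWN_CTA_KEYWORDS = ["cta", "ct angio", "angiogram", "circle of willis"]

def looks_like_cta(series_description: str) -> bool:
    """Return True if a series description plausibly refers to a CTA scan."""
    desc = series_description.lower().strip()
    return any(keyword in desc for keyword in KNOWN_CTA_KEYWORDS)

# Series named inconsistently by different radiographers:
examples = ["CTA Head", "ct angio COW", "Head CT plain"]
flags = [looks_like_cta(s) for s in examples]  # [True, True, False]
```

In practice, vendor training sessions like the ones described below standardize naming at the source, which is more robust than keyword matching alone.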

Nicolab was notified of the issue, and the company performed training sessions with radiographers. A second implementation took place, and researchers found that across 125 suspected stroke cases  … 

  • Sensitivity was 93% in both phases of the study.
  • Specificity rose from the first to second implementation (91% to 94%).
  • The technical failure rate dropped (25% to 17%).
  • Only two cases of technical failure occurred in the last month of the study.
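For readers less familiar with these metrics, sensitivity is the share of true LVO cases the tool flags, and specificity is the share of non-LVO cases it correctly clears. A worked example using the standard formulas (the confusion-matrix counts below are hypothetical — the study reported only the percentages, not raw counts):

```python
# Standard definitions; counts are illustrative assumptions, not study data.
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)  # true positives / all actual LVO cases

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)  # true negatives / all actual non-LVO cases

# e.g. 28 LVOs flagged and 2 missed; 94 non-LVOs cleared and 6 falsely flagged
sens = sensitivity(tp=28, fn=2)   # ≈ 0.93, i.e. 93%
spec = specificity(tn=94, fp=6)   # 0.94, i.e. 94%
```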

The Takeaway

The new study is a warts-and-all description of a real-world AI implementation. It shows the potential of AI to improve clinical care for a debilitating condition, but also that success may require additional work on the part of both clinicians and AI developers.

Who Owns LVO AI?

The FDA’s public “reminder” that studies analyzed by AI-based LVO detection tools (CADt) still require radiologist interpretation became one of the hottest stories in radiology last week, and although many of us didn’t realize it at the time, it made a big statement about how AI-based care coordination is changing the way care teams and radiologists work together.

The FDA decided to issue this clarification after finding that some providers were using LVO AI tools to guide their stroke treatment decisions and “might not be aware” that they need to base those decisions on radiologist interpretations. The agency reiterated that these tools are only intended to flag suspicious exams and support diagnostic prioritization, and revealed plans to work with LVO AI vendors to make sure everyone understands radiologists’ role in these workflows. 

This story was covered in all the major radiology publications and sparked a number of social media discussions with some interesting theories:

  • One social post suggested that the FDA is preemptively taking a stand against autonomous AI
  • Some posts and articles wondered if AI might be overly influencing radiologists’ diagnoses
  • The Imaging Wire didn’t even mention care coordination until a reader emailed with a clarification and we went back and edited our initial story

That reader had a point. It does seem like this is more of a care coordination issue than an AI diagnostics issue, considering that:

  • These tools send notifications and images to interventionalists/surgeons before radiologists are able to read the same cases
  • Two of the three leading LVO AI care coordination tools are marketed to everyone on the stroke team except radiologists (go check their sites)
  • Before AI care coordination came along, it would have been hard to believe that stroke team members “might not be aware” that they needed to check radiologist interpretations before making care decisions

The Takeaway

LVO AI care coordination tools have been a huge commercial and clinical success, and care coordination platforms are quickly expanding to new use cases.

That seems like good news for emergency patients and care teams, but as the FDA reminded us last week, it also means that we’re going to need more safeguards to ensure that care decisions are based on radiologists’ diagnoses — even if the AI tool already informed care teams what the diagnosis might be.
