Should Patients Get Their Radiology Reports?

It’s one of radiology’s great dilemmas – should patients get their own radiology reports? A new review article in JACR examines this question in more detail, documenting shifting attitudes toward data sharing among radiologists, referring physicians, and patients themselves.

In reality, the question of whether patients should get their own reports has been settled by the 2022 implementation of federal information blocking rules that prevent providers from withholding patient data. 

  • But open questions remain, such as the best mechanisms for delivering data to patients and how to ensure they aren’t confused or alarmed by radiology findings.

To that end, researchers conducted a systematic review of studies from 2007 to 2023 on patient access to radiology reports, eventually identifying 33 publications that revealed …

  • 70% of studies found that patients expressed a positive preference for accessing their radiology reports, a trend consistent over the entire study period.
  • 42% of studies documented patient difficulties in understanding medical terminology.
  • 33% highlighted concerns about patient anxiety and emotional impact.
  • Physician opinions on report sharing shifted from 2010 to 2022, from initial dissatisfaction to a gradual appreciation of its benefits.
  • Most studies focused on patient opinions rather than those of referring physicians and radiologists, whose opinions were found in only 18% and 9% of studies, respectively.

A major problem identified by the researchers is that radiology reports contain medical terminology that patients can’t easily understand, which can lead to confusion and anxiety.

  • Communicating findings in plain language could be one solution, but the researchers said little progress has been made due to “resistance from radiologists and entrenched reporting practices.” 

Although it wasn’t mentioned by the study authors, generative AI offers one possible solution by using natural language processing algorithms to create patient-friendly versions of clinical reports.
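The simplest version of this idea doesn’t even need a language model. As a minimal sketch, a glossary lookup can rewrite common jargon into plain language; the terms and wording below are hypothetical examples, and a real system would use a generative AI model rather than a fixed dictionary:

```python
import re

# Hypothetical mini-glossary mapping radiology jargon to plain language.
# A production system would use a generative AI model; this dictionary
# lookup is only an illustrative stand-in.
GLOSSARY = {
    "cardiomegaly": "an enlarged heart",
    "pleural effusion": "fluid around the lungs",
    "pneumothorax": "a collapsed lung",
}

def simplify_report(report: str) -> str:
    """Replace known medical terms with patient-friendly wording."""
    simplified = report
    for term, plain in GLOSSARY.items():
        simplified = re.sub(term, plain, simplified, flags=re.IGNORECASE)
    return simplified

print(simplify_report("Cardiomegaly is noted. No pleural effusion."))
# prints: an enlarged heart is noted. No fluid around the lungs.
```

A dictionary can’t handle negations, context, or grammar, which is exactly the gap generative models are meant to fill.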

The Takeaway

Once patients get access to their own reports, it’s impossible to put that genie back in the bottle. Rather than debating whether patients should get radiology reports, the question now should be how radiologists can ensure their reports will be understood without confusion by their ultimate customers – patients.

AI Tug of War Continues

The ongoing tug of war over AI’s value to radiology continues. This time the rope has moved in AI’s favor with publication of a new study in JAMA Network Open that shows the potential of a new type of AI language model for creating radiology reports.

  • Headlines about AI have ping-ponged in recent weeks, from positive studies like MASAI and PERFORMS to more equivocal trials like a chest X-ray study in Radiology and news from the UK that healthcare authorities may not be ready for chest X-ray AI’s full clinical roll-out. 

In the new paper, Northwestern University researchers tested a chest X-ray AI algorithm they developed with a transformer technique, a type of generative AI language model that can both analyze images and generate radiology text as output. 

  • Transformer language models show promise due to their ability to combine both image and non-image data, as researchers showed in a paper last week.

The Northwestern researchers tested their transformer model on 500 chest radiographs of patients evaluated overnight in the emergency department from January 2022 to January 2023. 

Reports generated by AI were then compared to reports from a teleradiologist as well as the final report by an in-house radiologist, which was set as the gold standard. The researchers found that AI-generated reports …

  • Had sensitivity a bit lower than teleradiology reports (85% vs. 92%)
  • Had specificity a bit higher (99% vs. 97%)
  • In some cases improved on the in-house radiology report by detecting subtle abnormalities missed by the radiologist
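As a refresher on what those two metrics mean, sensitivity is the share of actual positives the model flags and specificity is the share of actual negatives it correctly clears. A minimal sketch, using hypothetical counts chosen only to reproduce percentages like those above (the study’s actual case counts aren’t given here):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: share of actual positive cases correctly flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: share of actual negative cases correctly cleared."""
    return tn / (tn + fp)

# Hypothetical counts, chosen to yield figures like the AI results above.
tp, fn = 85, 15   # 85 true positives, 15 missed findings
tn, fp = 99, 1    # 99 true negatives, 1 false alarm

print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # sensitivity = 85%
print(f"specificity = {specificity(tn, fp):.0%}")  # specificity = 99%
```

The trade-off in the results reads directly off these formulas: the AI missed slightly more true findings than the teleradiologists (lower sensitivity) but raised fewer false alarms (higher specificity).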

Generative AI language models like the Northwestern algorithm could perform better than algorithms that rely on a classification approach to predict the presence of pathology. Classification models limit medical diagnoses to yes/no predictions that may omit context relevant to clinical care, the researchers believe. 

In real-world clinical use, the Northwestern team thinks their model could assist emergency physicians in circumstances where in-house radiologists or teleradiologists aren’t immediately available, helping triage emergent cases.

The Takeaway

After the negative headlines of the last few weeks, it’s good to see positive news about AI again. Although the current study is relatively small and much larger trials are needed, the Northwestern research has promising implications for the future of transformer-based AI language models in radiology.
