AI As Malpractice Safety Net

One of the emerging use cases for AI in radiology is as a safety net that could help hospitals avoid malpractice cases by catching errors made by radiologists before they can cause patient harm. The topic was reviewed in a Sunday presentation at RSNA 2024.

Clinical AI adoption has been held back by economic factors such as limited reimbursement and the lack of strong return on investment. 

  • Healthcare providers want to know that their AI investments will pay off, either through direct reimbursement from payors or improved operational efficiency.

At the same time, providers face rising malpractice risk, with a number of recent high-profile legal cases.

  • For example, a New York hospital was hit with a $120M verdict after a resident physician working the night shift missed a pulmonary embolism. 

Could AI limit risk by acting as a backstop to radiologists? 

  • At RSNA 2024, Benjamin Strong, MD, chief medical officer at vRad, described how they have deployed AI as a QA safety net. 

vRad mostly develops its own AI algorithms, with the first algorithm deployed in 2015. 

  • vRad is running AI algorithms as a backstop for 13 critical pathologies, from aortic dissection to superior mesenteric artery occlusion.

vRad’s QA workflow begins after the radiologist issues a final report (without using AI), and an algorithm then reviews the report automatically. 

  • If discrepancies are found, the report is sent to a second radiologist, who can kick the study back to the original radiologist if they believe an error has occurred. The entire process takes 20 minutes. 
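For readers who like to think in code, the review loop vRad describes can be sketched in a few lines of Python. This is a hypothetical illustration only – the class and function names are ours, not anything from vRad’s actual system:

```python
from dataclasses import dataclass

@dataclass
class QAResult:
    """One finalized study after the post-report AI pass (illustrative only)."""
    study_id: str
    ai_findings: set[str]       # critical pathologies flagged by the AI models
    report_findings: set[str]   # pathologies mentioned in the final report

    @property
    def discrepancies(self) -> set[str]:
        # Pathologies the AI flagged that the final report did not mention
        return self.ai_findings - self.report_findings

def route_study(result: QAResult) -> str:
    """Route a finalized study: any discrepancy goes to a second radiologist."""
    if result.discrepancies:
        return "second_read"   # reviewer may kick it back to the original radiologist
    return "no_action"

case = QAResult("study-001", ai_findings={"aortic dissection"}, report_findings=set())
print(route_study(case))  # -> second_read
```

In practice the discrepancy check compares AI image analysis against natural-language report findings – a much harder problem than this set comparison suggests.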

In a review of the program over one year, vRad found …

  • Corrections were made for about 1.5k diagnoses out of 6.7M exams.
  • The top five AI models accounted for over $8M in medical malpractice savings. 
  • Three pathologies – spinal epidural abscess, aortic dissection, and ischemic bowel due to SMA occlusion – would have amounted to $18M in payouts over four years.
  • Adding intracranial hemorrhage and pulmonary embolism creates what Strong called the “Big Five” of pathologies that are either the most frequently missed or the most expensive when missed.

The Takeaway

The findings offer an intriguing new use case for AI adoption. Avoiding just one malpractice verdict or settlement would more than pay for the cost of AI installation, in most cases many times over. How’s that for return on investment?

How Are Doctors Using AI?

How are healthcare providers who have adopted AI really using it? A new Medscape/HIMSS survey found that most providers are using AI for administrative tasks, while medical image analysis is also one of the top AI use cases. 

AI has the potential to revolutionize healthcare, but many industry observers have been frustrated with the slow pace of clinical adoption. 

  • Implementation challenges, regulatory issues, and lack of reimbursement are among the reasons keeping more healthcare providers from embracing the technology.

But the Medscape/HIMSS survey shows some early successes for AI … as well as lingering questions. 

  • Researchers surveyed a total of 846 people in the U.S. who were either executive or clinical leaders, practicing physicians or nurses, or IT professionals, and whose practices were already using AI in some way.

The top four tasks for which AI is being used were administrative rather than clinical, with image analysis occupying the fifth spot … 

  1. Transcribing patient notes (36%). 
  2. Transcribing business meetings (32%).
  3. Creating routine patient communications (29%).
  4. Performing patient record-keeping (27%).
  5. Analyzing medical images (26%).

The survey also analyzed attitudes toward AI, finding …

  • 57% said AI helped them be more efficient and productive.
  • But lower marks were given for reducing staff hours (10%) and lowering costs (31%).
  • AI got the highest marks for helping with transcription of business meetings (77%) and patient notes (73%), reviewing medical literature (72%), and medical image analysis (70%).

The findings track well with developments at last week’s RSNA 2024, where AI algorithms dedicated to non-clinical tasks like radiology report generation, scheduling, and operations analysis showed growing prominence. 

  • Indeed, many AI developers have specifically targeted the non-clinical space, both because commercialization is easier (FDA authorization is not typically needed) and because doctors often say they need more help with administrative rather than clinical tasks.

The Takeaway

While it’s easy to be impatient with AI’s slow uptake, the Medscape/HIMSS survey shows that AI adoption is indeed occurring at medical practices. And while image analysis was radiology’s first AI use case, speeding up workflow and administrative tasks may end up being the technology’s most impactful application.

RSNA Goes All-In on AI

CHICAGO – It’s been AI all the time this week at RSNA 2024. From clinical sessions packed with the latest findings on AI’s utility to technical exhibits crowded with AI vendors, artificial intelligence – and its impact on radiology – was easily the hottest trend at McCormick Place.

Radiology greeted AI with initial skepticism when the first applications like IBM Watson were introduced at RSNA around a decade ago.

  • But the field’s attitude has been evolving to the point where AI is now being viewed as perhaps the only technology that can save the discipline from the vicious cycle of rising exam volume, falling reimbursement, and pervasive levels of burnout.

RSNA telegraphed the shift last year by announcing that Stanford University’s Curtis Langlotz, MD, PhD, would be RSNA 2024 president. 

  • Langlotz is one of the most respected AI researchers and educators in radiology, and even coined the phrase that while AI would not replace radiologists, radiologists with AI would replace those without it. 

In his president’s address, Langlotz echoed this theme, painting a picture of a future radiology in which humans and machines collaborate to deliver better patient care than either could alone.

  • Langlotz’s talk was followed by a presentation by another prominent AI luminary – Nina Kottler, MD, of Radiology Partners.

Kottler took on the concerns that many in radiology (and in the world at large) have about AI as a disruptive force in a field that cherishes its traditions.

  • She advised radiology to take a leading role in AI adoption, repeating a famous quote that the best way to predict the future is to create it yourself. 

What were the other trends besides AI at RSNA 2024? They included…

  • Photon-counting CT, which is likely to see new market entrants in 2025.
  • Total-body PET, with PET scanners that have extra-long detector arrays.
  • Theranostics, a discipline that integrates diagnosis and therapy and promises to breathe new life into SPECT.
  • CT colonography and CCTA, which will see positive reimbursement changes in 2025.
  • Continued growth of CT lung screening, especially as a tool for opportunistic screening of other conditions.
  • Continued expansion of AI for breast screening.

The Takeaway

The RSNA meeting has been called radiology’s Super Bowl and World Cup all rolled into one, and this year didn’t disappoint. RSNA 2024 showed that radiology is prepared to fully embrace AI – and a future in which humans and machines collaborate to deliver better patient care.

Mammo AI Kicks Off RSNA 2024

Welcome to RSNA 2024! This year’s meeting is starting with a bang, with two important sessions highlighting the key role AI can play in breast screening. 

Sunday’s presentations cap a year that’s seen the publication of several large studies demonstrating that AI can improve breast cancer screening while potentially reducing radiologist workload. 

  • That momentum is continuing at RSNA 2024, with morning and afternoon sessions on Sunday dedicated to mammography AI. 

Some findings from yesterday’s morning session include … 

  • Two AI algorithms were better than one when supporting radiologists in breast screening, with cancer detection ratios (relative to historical performance) of 0.97-1.08 with one algorithm versus 1.09-1.14 with two.
  • ScreenPoint Medical’s Transpara algorithm was able to prioritize the worklist for 57% of breast screening exams by assigning risk scores to mammograms, helping reduce report turnaround times. 
  • iCAD’s ProFound AI software helped radiologists detect 7.8% more breast cancers on DBT exams, and cancers were detected at an earlier stage. 
  • Applying AI for breast screening to a racially diverse population yielded evenly distributed performance improvements.

Meanwhile, the Sunday afternoon session also included significant mammography AI presentations, such as …

  • A hybrid screening strategy – with suspicious breast cancer cases only recalled if the AI exhibits high certainty – reduced workload 50%. 
  • Lunit’s Insight DBT AI showed potential to reduce interval cancer rates in DBT screening by identifying 27% of false-negative and 36% of interval cancers.
  • In the ScreenTrustCAD trial in Sweden, using Lunit’s Insight MMG algorithm to replace a double-reading radiologist reduced workload 50% with comparable cancer detection rates.
  • A German screening program found that ScreenPoint Medical’s Transpara AI boosted the cancer detection rate by 8.7% (from 0.68% to 0.74%), with 8.8% of cancers solely detected by AI.
  • Researchers took a look back at abnormality scores from three commercially available AI algorithms after cancer diagnosis, finding evidence that cancers could be detected earlier. 

The Takeaway

Breast screening seems to be the clinical use case where radiologists need the most help, and Sunday’s sessions show the progress AI is making toward meeting that need. 

Be sure to check back on our X, LinkedIn, and YouTube pages for more coverage of this week’s events in Chicago. And if you see us on the floor of McCormick Place, stop and say hello!

How Should AI Be Monitored?

Once an AI algorithm has been approved and moves into clinical use, how should its performance be monitored? This question was top of mind at last week’s meeting of the FDA’s new Digital Health Advisory Committee.

AI has the potential to radically reshape healthcare and help clinicians manage more patients with fewer staff and other resources. 

  • But AI also represents a regulatory challenge because it’s constantly learning, such that after a few years an AI algorithm might be operating much differently from the version first approved by the FDA – especially with generative AI. 

This conundrum was a point of discussion at last week’s DHAC meeting, which was called specifically to focus on regulation of generative AI, and could result in new rules covering all AI algorithms. (An executive summary that outlines the FDA’s thinking is available for download.)

Radiology was well-represented at DHAC, understandable given it has the lion’s share of authorized algorithms (73% of 950 devices at last count). 

  • A half-dozen radiology AI experts gave presentations over two days, including Parminder Bhatia of GE HealthCare; Nina Kottler, MD, of Radiology Partners; Pranav Rajpurkar, PhD, of Harvard; and Keith Dreyer, DO, PhD, and Bernardo Bizzo, MD, PhD, both of Mass General Brigham and the ACR’s Data Science Institute.  

Dreyer and Bizzo directly addressed the question of post-market AI surveillance, discussing ongoing efforts to track AI performance, such as the ACR’s ARCH-AI and Assess-AI programs. 

The Takeaway

Last week’s DHAC meeting offers a fascinating glimpse at the issues the FDA is wrestling with as it contemplates stronger regulation of generative AI. Fortunately, radiology has blazed a trail in setting up structures like ARCH-AI and Assess-AI to monitor AI performance, and the FDA is likely to follow the specialty’s lead as it develops a regulatory framework.

Real-World Stroke AI Implementation

Time is brain. That simple saying encapsulates the urgency in diagnosing and treating stroke, when just a few hours can mean a huge difference in a patient’s recovery. A new study in Clinical Radiology shows the potential for Nicolab’s StrokeViewer AI software to improve stroke diagnosis, but also underscores the challenges of real-world AI implementation.

Early stroke research recommended that patients receive treatment – such as with mechanical thrombectomy – within 6-8 hours of stroke onset. 

  • CT is a favored modality to diagnose patients, and the time element is so crucial that some health networks have implemented mobile stroke units with ambulances outfitted with on-board CT scanners. 

AI is another technology that can help speed time to diagnosis. 

  • AI analysis of CT angiography scans can help identify cases of acute ischemic stroke missed by radiologists, in particular cases of large vessel occlusion, for which one study found a 20% miss rate. 

The U.K.’s National Health Service has been looking closely at AI to provide 24/7 LVO detection and improve accuracy in an era of workforce shortages.

  • StrokeViewer is a cloud-based AI solution that analyzes non-contrast CT, CT angiography, and CT perfusion scans and notifies clinicians when a suspected LVO is detected. Reports can be viewed via PACS or on a smartphone.  

In the study, NHS researchers shared their experiences with StrokeViewer, which included difficulties with its initial implementation but ultimately improved performance after tweaks to the software.  

  • For example, researchers encountered what they called “technical failures” in the first phase of implementation, mostly related to issues like different protocol names radiographers used for CTA scans that weren’t recognized by the software. 

Nicolab was notified of the issue, and the company performed training sessions with radiographers. A second implementation took place, and researchers found that across 125 suspected stroke cases  … 

  • Sensitivity was 93% in both phases of the study.
  • Specificity rose from the first to second implementation (91% to 94%).
  • The technical failure rate dropped (25% to 17%).
  • Only two cases of technical failure occurred in the last month of the study.
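The sensitivity and specificity figures above follow the standard confusion-matrix definitions, which can be sketched quickly in Python. The case counts below are illustrative only, not the study’s actual numbers:

```python
def sensitivity(tp: int, fn: int) -> float:
    # True-positive rate: detected LVOs / all actual LVOs
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    # True-negative rate: correctly cleared cases / all non-LVO cases
    return tn / (tn + fp)

# Illustrative counts only (not from the study)
print(round(sensitivity(tp=28, fn=2), 2))   # 0.93
print(round(specificity(tn=85, fp=5), 2))   # 0.94
```

Sensitivity is the number that matters most for a stroke safety net, since a missed LVO is far more costly than a false alarm.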

The Takeaway

The new study is a warts-and-all description of a real-world AI implementation. It shows the potential of AI to improve clinical care for a debilitating condition, but also that success may require additional work on the part of both clinicians and AI developers.

Time to Embrace X-Ray AI for Early Lung Cancer Detection

Each year approximately 2 billion chest X-rays are performed globally. They are a fast, noninvasive, and relatively inexpensive radiological examination for front-line diagnostics in outpatient, emergency, or community settings. 

  • But beyond the simplicity of CXR lies a secret weapon in the fight against lung cancer: artificial intelligence. 

Be it serendipitous screening, opportunistic detection, or incidental identification, there is potential for AI incorporated into CXR to screen patients for disease when they are getting an unrelated medical examination. 

  • This could include the patient in the ER undergoing a CXR for suspected broken ribs after a fall, or an individual referred by their doctor for a CXR with suspected pneumonia. These people, without symptoms, may unknowingly have small yet growing pulmonary nodules. 

AI can find these abnormalities and flag them to clinicians as a suspicious finding for further investigation. 

  • This has the potential to find nodules earlier, in the very early stages of lung cancer when it is easier to biopsy or treat. 

Yet only 5.8% of eligible ex-smokers in the U.S. undergo CT-based lung cancer screening. 

  • So the ability to cast the detection net wider through incidental pulmonary nodule detection has significant merits. 

Early global studies into the power of AI for incidental pulmonary nodules (IPNs) show exciting promise.

  • The latest evidence – one lung cancer detected for every 1,120 CXRs – has major implications for diagnosing and treating people earlier, and potentially saving lives. 

The qXR-LN chest X-ray AI algorithm from Qure.ai is raising the bar for incidental pulmonary nodule detection. In a retrospective study performed on missed or mislabelled US CXR data, qXR-LN achieved an impressive negative predictive value of 96% and an AUC score of 0.99 for detection of pulmonary nodules. 

  • By acting as a second pair of eyes for radiologists, qXR-LN can help detect subtle anatomical anomalies that may otherwise go unnoticed, particularly in asymptomatic patients.

The FDA-cleared solution serves as a crucial second reader, assisting in the review of chest radiographs on the frontal projection. 

  • In another multicenter study involving 40 sites across the U.S., the qXR-LN algorithm demonstrated an impressive AUC of 0.94 for scan-level nodule detection, highlighting its potential to significantly impact patient outcomes by identifying early signs of lung cancer that can easily be missed. 
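Negative predictive value – the 96% figure cited above – answers the question that matters most for a second reader: when the AI calls a scan negative, how often is it right? A quick sketch with made-up counts (not the study’s data):

```python
def npv(tn: int, fn: int) -> float:
    """Negative predictive value: of all scans the AI calls negative,
    the fraction that are truly nodule-free."""
    return tn / (tn + fn)

# Illustrative counts only (not the study's actual numbers)
print(round(npv(tn=960, fn=40), 2))  # 0.96
```

A high NPV is what lets clinicians trust a negative AI read in asymptomatic patients, where no one is specifically looking for nodules.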

The Takeaway 

By harnessing the power of AI for opportunistic lung cancer surveillance, healthcare providers can adopt a proactive approach to early detection without significant new investment – and ultimately improve patient survival rates.

Qure.ai will be exhibiting at RSNA 2024, December 1-4. Visit booth #4941 for discussion, debate, and demonstrations.

Sources

AI-Based Radiodiagnosis Using Chest X-Rays: A Review. Big Data Analytics for Social Impact, vol. 6, 2023.

Results From a Feasibility Study for Integrated TB and Lung Cancer Screening in Vietnam. Abstract 2560, Union Conference 2024.

Performance of a Chest Radiography AI Algorithm for Detection of Missed or Mislabelled Findings: A Multicenter Study. Diagnostics 12, no. 9 (2022): 2086.

Qure.ai’s AI-Driven Chest X-Ray Solution Receives FDA Clearance for Enhanced Lung Nodule Detection. Qure.ai press release, January 7, 2024.

Mammography AI Predicts Cancer Before It’s Detected

A new study highlights the predictive power of AI for mammography screening – before cancers are even detected. Researchers in a study in JAMA Network Open found that risk scores generated by Lunit’s Insight MMG algorithm predicted which women would develop breast cancer – years before radiologists found it on mammograms. 

Mammography image analysis has always been one of the most promising use cases for AI – even dating back to the days of computer-aided detection in the early 2000s. 

  • Most mammography AI developers have focused on helping radiologists identify suspicious lesions on mammograms, or triage low-risk studies so they don’t require extra review.

But a funny thing has happened during clinical use of these algorithms – radiologists found that AI-generated risk scores appeared to predict future breast cancers before they could be seen on mammograms. 

  • Insight MMG marks areas of concern and generates a risk score of 0-100 for the presence of breast cancer (higher numbers are worse). 

Researchers decided to investigate the risk scores’ predictive power by applying Insight MMG to screening mammography exams acquired in the BreastScreen Norway program over three biennial rounds of screening from 2004 to 2018. 

  • They then correlated AI risk scores to clinical outcomes in exams for 116k women for up to six years after the initial screening round.

Major findings of the study included … 

  • AI risk scores were higher for women who later developed cancer, 4-6 years before the cancer was detected.
  • The difference in risk scores increased over three screening rounds, from 21 points in the first round to 79 points in the third round.
  • Risk scores had very high accuracy by the third round (AUC=0.93).
  • AI scores were more accurate than existing risk tools like the Tyrer-Cuzick model.

How could AI risk scores be used in clinical practice? 

  • Women without detectable cancer but with high scores could be directed to shorter screening intervals or screening with supplemental modalities like ultrasound or MRI.
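A threshold-based triage of the 0-100 risk score might look like the following Python sketch. The cutoff values here are placeholders we chose for illustration, not validated clinical thresholds:

```python
def screening_plan(risk_score: float,
                   high_cutoff: float = 80.0,
                   elevated_cutoff: float = 50.0) -> str:
    """Map an AI risk score (0-100, higher is worse) to a follow-up pathway.

    Cutoffs are illustrative placeholders, not validated thresholds.
    """
    if risk_score >= high_cutoff:
        return "supplemental MRI/ultrasound"
    if risk_score >= elevated_cutoff:
        return "annual (shortened) screening interval"
    return "routine biennial screening"

print(screening_plan(92))  # supplemental MRI/ultrasound
print(screening_plan(35))  # routine biennial screening
```

Any real deployment would need the cutoffs tuned and validated against outcomes data, since they directly trade supplemental-imaging workload against missed cancers.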

The Takeaway

It’s hard to overstate the significance of the new results. While AI for direct mammography image interpretation still seems to be having trouble catching on (just like CAD did), risk prediction is a use case that could direct more effective breast screening. The study is also a major coup for Lunit, continuing a string of impressive clinical results with the company’s technology.

AI Recon Cuts CT Radiation Dose

Artificial intelligence got its start in radiology as a tool to help medical image interpretation, but much of AI’s recent progress is in data reconstruction: improving images before radiologists even get to see them. Two new studies underscore the potential of AI-based reconstruction to reduce CT radiation dose while preserving image quality. 

Radiology vendors and clinicians have been remarkably successful in reducing CT radiation dose over the past two decades, but there’s always room for improvement. 

  • In addition to adjusting CT scanning protocols like tube voltage and current, data reconstruction protocols have been introduced to take images acquired at lower radiation levels and “boost” them to look like full-dose images. 

The arrival of AI and other deep learning-based technologies has turbocharged these efforts. 

In the first study, researchers compared GE HealthCare’s DLIR (Deep Learning Image Reconstruction) operating at high strength to the company’s older ASiR-V protocol in CCTA scans with lower tube voltage (80 kVp), finding that deep learning reconstruction led to …

  • 42% reduction in radiation dose (2.36 vs. 4.07 mSv).
  • 13% reduction in contrast dose (50 mL vs. 58 mL).
  • Better signal- and contrast-to-noise ratios.
  • Higher image quality ratings.

In the second study, researchers from China including two employees of United Imaging Healthcare used a deep learning reconstruction algorithm to test ultralow-dose CT scans for coronary artery calcium scoring. 

  • They wanted to see if CAC scoring could be performed with lower tube voltage and current (80 kVp/20 mAs) and how the protocol compared to existing low-dose scans.

In tests with 156 patients, they found the ultralow-dose protocol produced …

  • Lower radiation dose (0.09 vs. 0.49 mSv).
  • No difference in CAC scoring or risk categorization. 
  • Higher contrast-to-noise ratio.

The Takeaway

AI-based data reconstruction gives radiologists the best of both worlds: lower radiation dose with better-quality images. These two new studies illustrate AI’s potential for lowering CT dose to previously unheard-of levels, with major benefits for patients.

Imaging News from ESC 2024

The European Society of Cardiology annual meeting concluded on September 2 in London, with around 32k clinicians from 171 countries attending some 4.4k presentations. Organizers reported that attendance finally rebounded to pre-COVID numbers. 

While much of ESC 2024 focused on treatments for cardiovascular disease, diagnosis with medical imaging still played a prominent role. 

  • Cardiac CT dominated many ESC sessions, and AI showed it is nearly as hot in cardiology as it is in radiology. 

Major imaging-related ESC presentations included…

  • A track on cardiac CT that underscored CT’s prognostic value:
    • Myocardial revascularization patients who got FFR-CT had lower hazard ratios for MACE and all-cause mortality (HR=0.73 and 0.48).
    • Incidental coronary artery anomalies appeared on 1.45% of CCTA scans for patients with suspected coronary artery disease.
  • AI flexed its muscles in a machine learning track:
    • AI analysis of low-dose CT scans had an AUC of 0.95 for predicting pulmonary congestion, a sign of acute heart failure. 
    • Echocardiography AI identified HFpEF with a higher AUC than clinical models (0.75 vs. 0.69).
    • AI analysis of transthoracic echo detected hypertrophic cardiomyopathy with an AUC of 0.85.

Another ESC hot topic was CT for calculating coronary artery calcium (CAC) scores, a possible predictor of heart disease. Sessions found … 

  • AI-generated volumetry of cardiac chambers based on CAC scans better predicted cardiovascular events than Agatston scores over 15 years of follow-up in an analysis of 5.8k patients from the MESA study. 
  • AI-CAC with CT was comparable to cardiac MRI read by humans for predicting atrial fibrillation (0.802 vs. 0.798) and stroke (0.762 vs. 0.751) over 15 years, which could give an edge to AI-CAC given its automated nature.
  • An AI algorithm enabled opportunistic CAC quantification from non-gated chest CT scans of 631 patients, finding high CAC scores in 13%. Many received statins, while 22 underwent additional imaging and two received intervention.
  • AI-generated CAC scores were also highlighted in a Polish study, detecting CAC on contrast CT at a rate comparable to humans on non-contrast CT (77% vs. 79%), possibly eliminating the need for additional non-contrast CT.  

The Takeaway

This week’s ESC 2024 sessions demonstrate the vital role of imaging in diagnosing and treating cardiovascular disease. While radiologists may not manage these cardiac patients directly, they can always apply knowledge of advances in other disciplines to their work.
