Is There Hope for CT Lung Screening?

New data on CT lung cancer screening rates offer a good news/bad news story. The bad news is that only 21.2% of eligible individuals in four US states got screened – far lower than rates for other exams like breast or colorectal cancer screening.

The good news is that, as low as the rate was relative to other tests, 21.2% is still much higher than previous estimates. And the study itself found that the rate of CT lung screening has risen over 8 percentage points in 3 years. 

Compliance with CT lung screening has lagged ever since Medicare approved payment for the exam in 2015. A recent JACR study found that screening rates were low among eligible people with both Medicare and commercial insurance (3.4% and 1.8%, respectively).

Why is screening compliance so low? Explanations have ranged from fatalism among people who smoke to reimbursement requirements for “shared decision-making,” which – unlike the rules for other screening exams – require patients and providers to discuss CT lung screening before an exam can be ordered.

In this new study in JAMA Network Open, researchers examined screening rates in four states – Maine, Michigan, New Jersey, and Rhode Island – from January 2021 to January 2022. The study drew data from the National Health Interview Survey and weighted it to reflect the US population of individuals eligible for CT lung screening, based on the criteria of age 55-79, a 30-pack-year smoking history, and currently smoking or having quit within the past 15 years. Major findings included: 

  • The rate for CT lung cancer screening was 21.2%, up from 12.8% in 2019
  • People with a primary health professional (PHP) were nearly 6 times more likely to get screened (OR=5.62)
  • The age sweet spot for screening was 65-77, with lower odds for those 55-64 (OR=0.43) and 78-79 (OR=0.17)
  • Rates varied between states, with Rhode Island having the highest rate (30.3%) and New Jersey the lowest (17.5%)
  • Of those who got screened, 27.7% were in poor health and 4.5% had no health insurance
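Estimates like the 21.2% figure above are weighted proportions: each NHIS respondent counts in proportion to the number of eligible people they represent, not as a single tally. A minimal sketch of how a survey weight is applied (the respondents and weights below are invented for illustration):

```python
# Weighted proportion: each respondent counts in proportion to how many
# people in the eligible population they represent (their survey weight).
def weighted_rate(screened_flags, weights):
    """Estimate a population rate from survey responses and their weights."""
    total = sum(weights)
    screened = sum(w for flag, w in zip(screened_flags, weights) if flag)
    return screened / total

# Hypothetical respondents: (was_screened, survey_weight)
flags = [True, False, False, True, False]
weights = [1200.0, 900.0, 1500.0, 800.0, 600.0]
rate = weighted_rate(flags, weights)  # (1200 + 800) / 5000 = 0.4
```

With real NHIS data the weights also account for survey design and nonresponse, but the arithmetic of the final rate is the same.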

The Takeaway

The findings offer some hope for CT lung screening, as the compliance rate is among the highest reported in recent research studies. On the other hand, many of those screened were in such poor health that they might not benefit from treatment. The high compliance rate among people with PHPs suggests that promoting screening through these providers could pay off, especially given the requirement for shared decision-making. 

Better Together at SIIM

Humans have a deep-seated need for interpersonal contact, and understanding that need should guide not only how we structure our work relationships in the post-COVID era, but also our development and deployment of new technologies like AI in radiology. 

That’s according to James Whitfill, MD, who gave Thursday’s opening address at SIIM 2023. Whitfill’s talk – which was followed by a raucous audience participation exercise – was a ringing demonstration that in-person meetings like SIIM still have relevance despite the proliferation of Zoom calls and remote work. 

Whitfill, chief transformation officer at HonorHealth in Arizona and an internist at the University of Arizona, was chair of the SIIM board in 2020 when the society made the difficult decision to move its annual meeting to be fully online during the pandemic.

The experience led Whitfill to ponder whether technology designed to help us work and collaborate virtually was an adequate substitute for in-person interaction. Unfortunately, the research suggests otherwise: 

  • Numerous studies have demonstrated the negative effect that the isolation of the COVID pandemic has had on adolescent mental health and academic performance 
  • Loneliness can also have a negative effect on physical well-being, with a recent U.S. Surgeon General’s report finding that prolonged isolation is the health equivalent of smoking 15 cigarettes a day
  • Peer-reviewed studies have shown that people working in in-person collaborative environments are about 10% more productive and creative than those working virtually. 

Whitfill’s talk was especially on-point given recent research indicating that workers across different industries who used AI were more lonely than those who didn’t, a phenomenon that shouldn’t be ignored by those planning radiology’s AI-based future. 

That said, virtual technologies can still play a role in making access to information more equitable. Whitfill noted that some 160 people were following the SIIM proceedings entirely online, and they otherwise would not have been able to benefit from the meeting’s content.

To drive the point home, Whitfill then had audience members participate in a team-based Rochambeau competition that sent peals of laughter ringing through Austin Convention Center.  

The Takeaway

Whitfill’s point was underscored repeatedly by SIIM 2023 attendees, who reiterated the value of interpersonal connections and networking at the conference. It’s ironic that a meeting devoted at least in part to intelligence that’s artificial has made us better appreciate relationships that are real.

AI Reinvigorates SIIM 2023

AUSTIN – Before AI came along, the Society for Imaging Informatics in Medicine (SIIM) seemed to be a conference in search of itself. SIIM (and before it, SCAR) built its reputation on education and training for radiology’s shift to digital image management. 

But what happens when the dog catches the truck? Radiology eventually fully adopted digital imaging, and that meant less need to teach people about technology they were already using every day.

Fast forward to the AI era, and SIIM seems to have found its new mission. Once again, radiology is faced with a transformative IT technology that few understand and even fewer know how to put into clinical practice. With its emphasis on education and networking, SIIM is a great forum to learn how to do both. 

That was exemplified by Wednesday’s SIIM keynote address from Ziad Obermeyer, MD, a physician and machine learning researcher at UC Berkeley who has published important research on bias in machine learning. 

While not a radiologist, Obermeyer served up a fascinating talk on how AI should be designed and adopted to have maximum impact. His advice included:

  • Don’t design AI to perform the same tasks humans do already. Train algorithms to perform in ways that make up for the shortcomings of humans.
  • Training algorithms on medical knowledge from decades ago is likely to produce bias when today’s patient populations don’t match those of the past.
  • Access to high-quality data is key to algorithm development. Data should be considered a public good, but there is too much friction in getting it. 

To solve some of these challenges, Obermeyer is involved in two projects: Nightingale Open Science, which connects researchers with health systems, and Dandelion Health, which is designed to help AI developers access the clinical data they need to test their algorithms. 

The Takeaway 

The rise of AI – particularly generative AI models like ChatGPT – has given SIIM a shot in the arm from a content perspective, and the return of in-person meetings plays to the conference’s strength as an intimate get-together where networking and relationship-building are almost as important as the content. Please follow along with the proceedings of SIIM 2023 on our Twitter and LinkedIn pages. 

Taking Ultrasound Beyond Breast Density

When should breast ultrasound be used as part of mammography screening? It’s often used in cases of dense breast tissue, but other factors should also come into play, say researchers in a new study in Cancer.

Conventional X-ray mammography has difficulties when used for screening women with dense breast tissue, so supplemental modalities like ultrasound and MRI are called into play. But focusing too much on breast density alone could mean that many women who are at high risk of breast cancer don’t get the additional imaging they need.

To study this issue, researchers analyzed the risk of mammography screening failures (defined as interval invasive cancer or advanced cancer) in ~825k screening mammograms in ~377k women and ~38k screening ultrasound studies in ~29k women. All exams were acquired from 2014 to 2020 at 32 healthcare facilities across the US.

Researchers then compared the mammography failure rate in women who got ultrasound and mammography to those who got mammography alone. Their findings included: 

  • Ultrasound was appropriately targeted at women with heterogeneously or extremely dense breasts, with 95.3% getting scans
  • However, based on their complete risk factor profile, women with dense breasts who got ultrasound had only a modestly higher risk of interval breast cancer compared to women who only got mammography (23.7% vs. 18.5%) 
  • More than half of women undergoing ultrasound screening had low or average risk of an interval breast cancer based on their risk factor profile, despite having dense breasts
  • The risk of advanced cancer was very close between the two groups (32.0% vs. 30.5%), suggesting that a large fraction of women at risk of advanced cancer are getting only mammography screening with no supplemental imaging

The Takeaway 

On the positive side, ultrasound is being widely used in women with dense breast tissue, indicating success in identifying these women and getting them the supplemental imaging they need. But the high rate of advanced cancer in women who only received mammography indicates that consideration of other risk factors – such as family history of breast cancer and body mass index – is necessary beyond just breast tissue density to identify women in need of supplemental imaging. 

Mayo’s AI Model

SAN DIEGO – What’s behind the slow clinical adoption of artificial intelligence? That question permeated the discussion at this week’s AIMed Global Summit, an up-and-coming conference dedicated to AI in healthcare.

Running June 4-7, this week’s meeting saw hundreds of healthcare professionals gather in San Diego. Radiology figured prominently as the medical specialty with the lion’s share of the over 500 FDA-cleared AI algorithms available for clinical use.

But being available for use and actually being used are two different things. A common refrain at AIMed 2023 was slow clinical uptake of AI, a problem widely attributed to difficulties in deploying and implementing the technology. One speaker noted that less than 5% of practices are using AI today.

One way to spur AI adoption is the platform approach, in which AI apps are vetted by a single entity for inclusion in a marketplace from which clinicians can pick and choose what they want. 

The platform approach is gaining steam in radiology, but Mayo Clinic is rolling the platform concept out across its entire healthcare enterprise. First launched in 2019, Mayo Clinic Platform aims to help clinicians enjoy the benefits of AI without the implementation headache, according to Halim Abbas, senior director of AI at Mayo, who discussed Mayo’s progress on the platform at AIMed. 

The Mayo Clinic Platform has several main features:

  • Each medical specialty maintains its own internal AI R&D team with access to its own AI applications 
  • At the same time, Mayo operates a centralized AI operation that provides tools and services accessible across departments, such as data de-identification and harmonization, augmented data curation, and validation benchmarks
  • Clinical data is made available outside the -ologies, but the data is anonymized and secured, an approach Mayo calls “data behind glass”

Mayo Clinic Platform gives different -ologies some ownership of AI, but centralizes key functions and services to improve AI efficiency and smooth implementation. 

The Takeaway 

Mayo Clinic Platform offers an intriguing model for AI deployment. By removing AI’s implementation pain points, Mayo hopes to ramp up clinical utilization, and Mayo has the organizational heft and technical expertise to make it work (see below for news on Mayo’s new generative AI deal with Google Cloud). 

But can Mayo’s AI model be duplicated at smaller health systems and community providers that don’t have its IT resources? Maybe we’ll find out at AIMed 2024.

When AI Goes Wrong

What impact do incorrect AI results have on radiologist performance? That question was the focus of a new study in European Radiology in which radiologists who received incorrect AI results were more likely to make wrong decisions on patient follow-up – even though they would have been correct without AI’s help.

The accuracy of AI has become a major concern as deep learning models like ChatGPT become more powerful and come closer to routine use. There’s even a term – the “hallucination effect” – for when AI models veer off script to produce text that sounds plausible but in fact is incorrect.

While AI hallucinations may not be an issue in healthcare – yet – there is still concern about the impact that AI algorithms are having on clinicians, both in terms of diagnostic performance and workflow. 

To see what happens when AI goes wrong, researchers from Brown University sent 90 chest radiographs with “sham” AI results to six radiologists, with 50% of the studies positive for lung cancer. They employed different strategies for AI use, ranging from keeping the AI recommendations in the patient’s record to deleting them after the interpretation was made. Findings included:

  • When AI falsely called a true-pathology case “normal,” radiologists’ false-negative rates rose compared to when they didn’t use AI (20.7-33.0% depending on AI use strategy vs. 2.7%)
  • AI calling a negative case “abnormal” boosted radiologists’ false-positive rates compared to without AI (80.5-86.0% vs. 51.4%)
  • Not surprisingly, when AI calls were correct, radiologists were more accurate with AI than without, with increases in both true-positive rates (94.7-97.8% vs. 88.3%) and true-negative rates (89.7-90.7% vs. 77.3%)

Fortunately, the researchers offered suggestions on how to mitigate the impact of incorrect AI. Radiologists had fewer false negatives when AI provided a box around the region of suspicion, a phenomenon the researchers said could be related to AI helping radiologists focus. 

Also, radiologists’ false positives were higher when AI results were retained in the patient record versus when they were deleted. Researchers said this was evidence that radiologists were less likely to disagree with AI if there was a record of the disagreement occurring. 

The Takeaway 

As AI becomes more widespread clinically, studies like this will become increasingly important in shaping how the technology is used in the real world, and add to previous research on AI’s impact. Awareness that AI is imperfect – and strategies that take that awareness into account – will become key to any AI implementation.

When TIA Imaging Is Incomplete

A new study in AJR calculates the cost to patients when imaging evaluation is incomplete, finding that people with transient ischemic attack (TIA) who didn’t get full imaging workups were 30% more likely to have a new stroke diagnosis within the next 90 days.

Some 240,000 people experience TIA annually in the US. While TIAs typically last only a few minutes and don’t cause lasting neurological damage, they can be a warning sign of future neurological events to come.

Medical imaging – typically CT and MRI – is key in the neurological workup of TIA patients, and TIA can be treated with antithrombotic therapy, which reduces the likelihood of stroke within the following 90 days. Therefore, guidelines call for prompt neuroimaging of the brain and neck in TIA patients, typically within 48 hours, with MRI the primary and CT the secondary option.

But what happens if TIA patients don’t get complete imaging as part of their workup? To answer this question, researchers from Colorado and California analyzed a database of 111,417 people seen at 4,253 hospitals who presented to the ED with TIA symptoms from 2016 to 2017. 

They tracked which patients received complete neurovascular imaging within 48 hours as part of their workup, then followed how many received a primary diagnosis of stroke within 90 days of the initial TIA encounter. Findings included:

  • 62.7% of patients received brain imaging and complete neurovascular imaging (both head and neck) within 48 hours
  • 37.3% received brain imaging but incomplete neurovascular imaging 
  • There was a higher rate of stroke at 90 days in TIA patients with incomplete imaging workup (7.0% vs. 4.4%)
  • Patients with incomplete neurovascular imaging also had a greater chance of stroke at 90 days (OR=1.3)
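For readers who want to connect the raw rates to the odds ratio, here is how an unadjusted OR would be computed from the two 90-day stroke rates above. Note that the study’s reported OR of 1.3 was adjusted for covariates, so it differs from this raw figure:

```python
# Unadjusted odds ratio from two group event rates. The study's reported
# OR of 1.3 is covariate-adjusted, so the raw calculation below will not
# match it exactly.
def odds(rate):
    """Convert a probability to odds."""
    return rate / (1.0 - rate)

def odds_ratio(rate_exposed, rate_unexposed):
    return odds(rate_exposed) / odds(rate_unexposed)

# 90-day stroke rates: incomplete imaging (7.0%) vs. complete imaging (4.4%)
or_unadjusted = odds_ratio(0.070, 0.044)
print(round(or_unadjusted, 2))  # ~1.64 before adjustment
```

The gap between the raw ~1.64 and the adjusted 1.3 reflects the covariates (patient and hospital factors) the authors controlled for.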

The Takeaway 

While the benefits of neuroimaging for stroke have been demonstrated in the literature, imaging’s value for TIA has been less certain – until now. The AJR study shows that neuroimaging is just as vital for TIA workup, and it supports guidelines calling for cross-sectional imaging of the head and neck within 48 hours of TIA.

CT Flexes Muscles in Heart

CT continues to flex its muscles as a tool for predicting heart disease risk, in large measure due to its prowess for coronary artery calcium scoring. In JAMA, a new paper found CT-derived CAC scores to be more effective in predicting coronary heart disease than genetic scores when added to traditional risk scoring. 

Traditional risk scoring – based on factors such as cholesterol levels, blood pressure, and smoking status – has done a good job of directing cholesterol-lowering statin therapy to people at risk of future cardiac events. But these scores still provide an imprecise estimate of coronary heart disease risk. 

Two relatively new tools for improving CHD risk prediction are CAC scoring from CT scans and polygenic risk factors, based on genetic variants that could predispose people toward heart disease. But the impact of either of these tools (or both together) when added to traditional risk scoring hasn’t been investigated. 

To answer this question, researchers analyzed the impact of both types of scoring on participants in the Multi-Ethnic Study of Atherosclerosis (1,991 people) and the Rotterdam Study (1,217 people). CHD risk was predicted based on both CAC and PRS and then compared to actual CHD events over the long term. 

They also tracked how accurate both tools were in reclassifying people into different risk categories (higher than 7.5% risk calls for statins). Findings included: 

  • Both CAC scores and PRS were effective in predicting 10-year risk of CHD in the MESA dataset (HR=2.60 for CAC score, HR=1.43 for PRS). Scores were slightly lower but similar in the Rotterdam Study
  • The C statistic was higher for CAC scoring than PRS (0.76 vs. 0.69; 0.7 indicates a “good” model and 0.8 a “strong” model) 
  • The improved accuracy in reclassifying patient risk was statistically significant when CAC was added to traditional factors (half of study participants moved into the high-risk group), but not when PRS was added  
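The C statistic cited above is the probability that a model assigns a higher risk score to someone who went on to have a CHD event than to someone who didn’t (for binary outcomes it’s equivalent to ROC AUC). A small illustration with made-up risk scores:

```python
# C statistic (concordance): fraction of event/non-event pairs in which the
# event case received the higher risk score; ties count as half.
from itertools import product

def c_statistic(scores_events, scores_nonevents):
    pairs = list(product(scores_events, scores_nonevents))
    wins = sum(1.0 if e > n else 0.5 if e == n else 0.0 for e, n in pairs)
    return wins / len(pairs)

# Hypothetical risk scores: people with CHD events vs. without
events = [0.8, 0.45, 0.6]
nonevents = [0.5, 0.3, 0.2, 0.4]
print(round(c_statistic(events, nonevents), 3))  # 11 of 12 pairs correctly ordered
```

On this scale, the study’s 0.76 for CAC scoring means roughly three out of four such pairs are ranked correctly, versus about seven in ten for PRS.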

The Takeaway 

This study adds to the growing body of evidence supporting cardiac CT as a prognostic tool for heart disease, and reinforces CT’s prowess in the heart. The findings also support the growing chorus in favor of using CT as a screening tool in cases of intermediate or uncertain risk for future heart disease.

AI Investment Shift

VC investment in the AI medical imaging sector has shifted notably in the last couple years, says a new report from UK market intelligence firm Signify Research. The report offers a fascinating look at an industry where almost $5B has been raised since 2015. 

Total Funding Value Drops – Both investors and AI independent software vendors (ISVs) have noticed reduced funding activity, and that’s reflected in the Signify numbers. VC funding of imaging AI firms fell 32% in 2022, to $750.4M, down from a peak of $1.1B in 2021.

Deal Volume Declines – The number of deals getting done has also fallen, to 42 deals in 2022, off 30% compared to 60 in 2021. In imaging AI’s peak year, 2020, 95 funding deals were completed. 

VC Appetite Remains Strong – Despite the declines, VCs still have a strong appetite for radiology AI, but funding has shifted from smaller early-stage deals to larger, late-stage investments. 

HeartFlow Deal Tips Scales – The average deal size has spiked this year to date, to $27.6M, compared to $17.9M in 2022, $18M in 2021, and $7.9M in 2020. Much of the higher 2023 number is driven by HeartFlow’s huge $215M funding round in April; Signify analyst Sanjay Parekh, PhD, told The Imaging Wire he expects the average deal value to fall to $18M by year’s end.

The Rich Get Richer – Much of the funding has been concentrated in a dozen or so AI companies that have each raised over $100M. Big winners include HeartFlow (over $650M) as well as Cleerly, Shukun Technology, and Viz.ai (each over $250M). Signify’s $100M club is rounded out by Aidoc, Cathworks, Keya Medical, Deepwise Shenrui, Imagen Technologies, Perspectum, Lunit, and Annalise.ai.

US and China Dominate – On a regional basis, VC funding is going to companies in the US (almost $2B) and China ($1.1B). Following them are Israel ($513M), the UK ($310M), and South Korea ($255M).  

The Takeaway 

Signify’s report shows the continuation of trends seen in previous years that point to a maturing market for medical imaging AI. As with any such market, winners and losers are emerging, and VCs are clearly being selective about choosing which horses to put their money on.

The Perils of Worklist Cherry-Picking

If you’re a radiologist, chances are at some point in your career you’ve cherry-picked the worklist. But picking easy, high-RVU imaging studies to read before your colleagues isn’t just rude – it’s bad for patients and bad for healthcare.

That’s according to a new study in Journal of Operations Management that analyzes radiology cherry-picking in the context of operational workflow and efficiency. 

Based on previous research, researchers hypothesized that radiologists who are free to pick from an open worklist would choose the easier studies with the highest compensation – the classic definition of cherry-picking.

To test their theory, they analyzed a dataset of 2.2M studies acquired at 62 hospitals from 2014 to 2017 that were read by 115 different radiologists. They developed a statistical metric called “bang for the buck,” or BFB, to classify the value of an imaging study in terms of interpretation time relative to RVU level. 
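The paper’s exact BFB formula isn’t spelled out in this summary, but the intuition – RVUs earned per unit of reading time – and the worklist behavior it predicts can be sketched. The studies, RVU values, and reading times below are hypothetical, not from the paper:

```python
# Sketch of "bang for the buck" (BFB) as RVUs per expected minute of reading,
# and how sorting an open worklist by it models cherry-picking.
from dataclasses import dataclass

@dataclass
class Study:
    name: str
    rvu: float           # relative value units (compensation)
    read_minutes: float  # expected interpretation time
    priority: str        # "Stat", "Expedited", or "Routine"

    @property
    def bfb(self):
        return self.rvu / self.read_minutes

worklist = [
    Study("CT head",     rvu=1.0, read_minutes=10, priority="Expedited"),
    Study("Chest x-ray", rvu=0.3, read_minutes=2,  priority="Routine"),
    Study("MRI brain",   rvu=2.0, read_minutes=25, priority="Routine"),
]

# A cherry-picker reads in descending BFB order, ignoring priority flags,
# so the Routine chest x-ray (BFB 0.15) jumps ahead of the Expedited
# CT head (BFB 0.10).
cherry_order = sorted(worklist, key=lambda s: s.bfb, reverse=True)
print([s.name for s in cherry_order])
```

This is exactly the dynamic the study measured: high-BFB Routine studies leapfrogging Expedited exams on an open worklist.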

They then assessed the impact of BFB on turnaround time (TAT) for different types of imaging exams based on priority, classified as Stat, Expedited, and Routine. Findings included:

  • High-priority Stat studies were reported quickly regardless of BFB, indicating little cherry-picking impact
  • For Routine studies, those with higher BFB had much shorter turnaround times – a sign of cherry-picking
  • Adding one high-BFB Routine study to a radiologist’s worklist resulted in a much larger increase in TAT for Expedited exams than adding a low-BFB study (17.7 minutes vs. 2 minutes)
  • The above delays could result in longer patient lengths of stay, translating to $2.1M-$4.2M in extra costs across the 62 hospitals in the study

The findings suggest that radiologists in the study prioritized high-BFB Routine studies over Expedited exams – undermining the exam prioritization system and impacting care for priority cases.

Fortunately, the researchers offer suggestions for countering the cherry-picking effect, such as through intelligent scheduling or even hiding certain studies – like high-BFB Routine exams – from radiologists when there are Expedited studies that need to be read. 

The Takeaway 

The study concludes that radiology’s standard workflow of an open worklist that any radiologist can access can become an “imbalanced compensation scheme” that can lead to poorer service for high-priority tasks. On the positive side, the solutions proposed by the researchers seem tailor-made for IT-based interventions, especially ones that are rooted in AI. 
