More Support for CT Lung Cancer Screening

Yet another study supporting CT lung cancer screening has been published, adding to a growing body of evidence that population-based CT screening programs will be effective in reducing lung cancer deaths. 

The new study comes from European Radiology, where researchers from Hungary describe findings from HUNCHEST-II, a population-based program that screened 4.2k high-risk people at 18 institutions. 

  • Screening criteria were largely similar to other studies: people between the ages of 50 and 75 who were current or former smokers with at least 25 pack-year histories. Former smokers had quit within the last 15 years. 

Recruitment for HUNCHEST-II took place from September 2019 to January 2022. Participants received a baseline low-dose CT (LDCT) scan, with the study protocol calling for annual follow-up scans (more on this later). Researchers found: 

  • The prevalence of baseline screening exams positive for lung cancer was 4.1%, comparable to the NELSON trial (2.3%) but much lower than the NLST (27%)
  • 1.8% of participants were diagnosed with lung cancer throughout screening rounds
  • 1.5% of participants had their cancer found with the baseline exam
  • Positive predictive value was 58%, at the high end of population-based lung screening programs
  • 79% of screen-detected cancers were early stage, making them well-suited for treatment
  • False-positive rate was 42%, a figure the authors said was “concerning”
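
For readers wondering how a 58% positive predictive value squares with a 42% false-positive rate: the two figures appear to be complements, assuming (as the numbers suggest) that the authors define the false-positive rate as the share of positive screens that do not turn out to be cancer. In that case PPV = TP/(TP + FP) = 58%, so FP/(TP + FP) = 1 − PPV = 42% – the same statistic viewed from the other side, rather than an additional source of error.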

Taking a deeper dive into the data produces interesting revelations. Overdiagnosis and false positives are major concerns with any screening test; both were particular problems with NLST but were lower with HUNCHEST-II. 

  • Researchers said they used a volume-based nodule evaluation protocol, which reduced the false-positive rate compared to the nodule diameter-based approach in NLST.

Also, a high attrition rate occurred between the baseline scan and annual screening rounds, with only 12% of individuals with negative baseline LDCT results going on to follow-up screening (although the COVID-19 pandemic may have affected these results). 

The Takeaway

The HUNCHEST-II results add to the growing momentum in favor of national population-based CT lung screening programs. Germany is planning to implement a program in early 2024, and Taiwan is moving in the same direction. The question is, does the US need to step up its game as screening compliance rates remain low?

Accessing Quality Data for AI Training

One of the biggest roadblocks in medical AI development is the lack of high-quality, diverse data for these technologies to train on.

What Is the Issue with Data Access?

Artificial Intelligence (AI) has emerged as a game-changer in the realm of medical imaging, with immense potential to revolutionize clinical practices. AI-powered medical imaging can efficiently identify intricate patterns within data and provide quantitative assessments of disease biomarkers. This technology not only enhances the accuracy of diagnosis but can also significantly speed up the diagnostic process, ultimately improving patient outcomes.

While the landscape is promising, medical innovators grapple with challenges in accessing high-quality, diverse, and timely data, which is vital for training AI and driving progress.

A 2019 study from the Massachusetts Institute of Technology found that over half of medical AI studies predominantly relied on databases from high-income countries, particularly the United States and China. If models trained on homogeneous data are used clinically in diverse populations, they could pose risks to patients and worsen the health inequalities experienced by underrepresented groups. In the United States, if the Food and Drug Administration deems these risks too high, it could even reject a product’s application for approval. 

In trying to get hold of the best training data, AI developers, particularly startups and individual researchers, face a web of complexities, including legal, ethical, and technical considerations. Issues like data privacy, security, interoperability, and data quality compound these challenges, all of which are crucial in the effective and responsible utilization of healthcare data.

One company working to overcome these hurdles in the hope of accelerating high-quality innovation is Gradient Health.

Gradient Health’s Approach

Gradient Health offers AI developers instant access to one of the world’s largest libraries of anonymized medical images, sourced from hundreds of global hospitals, clinics, and research centers. This data is meticulously de-identified for compliance and can be tailored by vendors to suit their project’s needs and exported in machine learning-ready DICOM + JSON formats.
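
As a rough illustration of what consuming such “machine learning-ready DICOM + JSON” data might look like on the developer side, here is a minimal loading sketch in Python – the file names and JSON fields are hypothetical assumptions, not Gradient Health’s actual schema:

```python
# Hypothetical loader for a DICOM image plus a JSON sidecar of labels/metadata.
# File names and JSON fields are illustrative assumptions, not Gradient Health's schema.
import json

import numpy as np
import pydicom  # pip install pydicom

def load_example(dicom_path: str, json_path: str):
    ds = pydicom.dcmread(dicom_path)            # parse the DICOM file
    image = ds.pixel_array.astype(np.float32)   # pixel data as a NumPy array
    with open(json_path) as f:
        meta = json.load(f)                     # e.g. {"label": ..., "modality": ...}
    return image, meta

if __name__ == "__main__":
    img, meta = load_example("study_0001.dcm", "study_0001.json")
    print(img.shape, meta.get("label"))
```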

By partnering with Gradient Health, innovators can use these extensive, diverse datasets to train and validate their AI algorithms, mitigating bias in medical AI and advancing the development of precise, high-quality medical solutions.

Gaining access to top-tier data at the outset of the development process promises long-term benefits. Here’s how:

  • Expand Market Presence: Access the latest cross-vendor datasets to develop medical innovations, expanding your market share.
  • Global Expansion: Enter new regions swiftly with locally sourced data from your target markets, accelerating your global reach.
  • Competitive Edge: Obtain on-demand training data for imaging modalities and disease areas, facilitating product portfolio expansion.
  • Speed to Market: Quickly acquire data for product training and validation, reducing sourcing time and expediting regulatory clearances for faster patient delivery.

“After looking for a data provider for many weeks, I was not able to get even a sample delivery within one month. I was immensely glad to work with Gradient and go from first contact to final delivery within one week!” said Julien Schmidt, chief operations officer and co-founder at Mango Medical.

The Outlook

In recent years, medical AI has experienced significant growth. Innovations in medical imaging in particular have played a pivotal role in enabling healthcare professionals to identify diseases earlier and more accurately in patients with a range of conditions. 

Gradient Health offers a data-compliant, intuitive platform for AI developers, facilitating access to the essential data required to train these critical technologies. This approach holds the potential to save time, resources, and, most importantly, lives. 

More information about Gradient Health is available on the company’s website. They will also be exhibiting at RSNA 2023 in booth #5149 in the South Hall.

Unpacking the Biden Administration’s New AI Order

It seems like watershed moments in AI are happening on a weekly basis now. This time, the big news is the Biden Administration’s sweeping executive order that directs federal regulation of AI across multiple industries – including healthcare. 

The order comes as AI is becoming a clinical reality for many applications. 

  • The number of AI algorithms cleared by the FDA has been surging, and clinicians – particularly radiologists – are getting access to new tools on an almost daily basis.

But AI’s rapid growth – and in particular the rise of generative AI technologies like ChatGPT – has raised questions about its future impact on patient care and whether the FDA’s existing regulatory structure is suitable for such a new technology. 

The executive order appears to be an effort to get ahead of these trends. When it comes to healthcare, its major elements are summarized in a succinct analysis of the plan by Health Law Advisor. In short, the order: 

  • Calls on HHS to work with the VA and Department of Defense to create an HHS task force on AI within 90 days
  • Requires the task force to develop a strategic plan within a year that could include regulatory action regarding the deployment and use of AI for applications such as healthcare delivery, research, and drug and device safety
  • Orders HHS to develop a strategy within 180 days to determine if AI-enabled technologies in healthcare “maintain appropriate levels of quality” – basically, a review of the FDA’s authorization process
  • Requires HHS to set up an AI safety program within a year, in conjunction with patient safety organizations
  • Tells HHS to develop a strategy for regulating AI in drug development

Most analysts are viewing the executive order as the Biden Administration’s attempt to manage both risk and opportunity. 

  • The risk is that AI developers lose control of the technology, with consequences such as patients potentially harmed by inaccurate AI. The opportunity is for the US to become a leader in AI development by developing a long-term AI strategy. 

The Takeaway

The question is whether an industry that’s as fast-moving as AI – with headlines changing by the week – will lend itself to the sort of centralized long-term planning envisioned in the Biden Administration’s executive order. Time will tell.

Predicting the Future of Radiology AI

Making predictions is a messy business (just ask Geoffrey Hinton). So we’re always appreciative whenever key opinion leaders stick their necks out to offer thoughts on where radiology is headed and the major trends that will shape the specialty’s future. 

Two of radiology’s top thought leaders on AI and imaging informatics – Curtis Langlotz, MD, PhD, and Paul Chang, MD – gaze into the crystal ball in two articles published this week in Radiology as part of the journal’s centennial celebration. 

Langlotz offers 10 predictions on radiology AI’s future, several of which are summarized below:

  • Radiology will continue its leadership position when it comes to AI adoption in medicine, as evidenced by its dominance of FDA marketing authorizations
  • Virtual assistants will help radiologists draft reports – and reduce burnout
  • Radiology workstations will become cloud-based cockpits that seamlessly unify image display, reporting, and AI
  • Large language models like ChatGPT will help patients better understand their radiology reports
  • The FDA will reform its regulation of AI to be more flexible and speed AI authorizations (see our article in The Wire below)
  • Large databases like the Medical Imaging and Data Resource Center (MIDRC) will spur data sharing and, in turn, more rapid AI development

Langlotz’s predictions are echoed by Chang’s accompanying article in Radiology, in which he forecasts the future of imaging informatics. Like Langlotz, Chang sees the new array of AI-enabled tools as beneficial agents that will help radiologists manage growing workloads through dashboards, enhanced radiology reports, and workflow automation. 

The Takeaway

This week’s articles are required reading for anyone following the meteoric growth of AI in radiology. Far from Hinton’s dystopian view of a world without radiologists, Langlotz and Chang predict a future in which AI and IT technologies assist radiologists to do their jobs better and with less stress. We know which vision we prefer.

FDA Data Show AI Approval Boom

In the previous issue of The Imaging Wire, we looked at how venture capital investment in AI developers is fueling rapid growth in new AI applications for radiologists (despite a slowdown this year). 

This trend was underscored late last week with new data from the FDA showing strong growth in the number of regulatory authorizations of AI and machine learning-enabled devices in calendar 2023 compared to the year before. The findings show:

  • A resurgence of AI/ML authorizations this year, with over 30% growth compared to 14% in 2022 and 15% in 2021 – the last time authorizations grew this fast was in 2020 (+39%)
  • The FDA authorized 171 AI/ML-enabled devices in the past year. Of the total, 155 had final decision dates between August 1, 2022, and July 30, 2023, while 16 were reclassifications from prior periods 
  • Devices intended for radiology made up 79% of the total (122/155), an impressive number but down slightly compared to 87% in 2022 
  • Other medical specialties included cardiology (9%), neurology (5%), and gastroenterology/urology (4%)

One interesting wrinkle in the report was the fact that despite all the buzz around large language models for generative AI, the FDA has yet to authorize a device that uses generative AI or that is powered by LLMs. 

The Takeaway

The FDA’s new report confirms that radiology AI shows no sign of slowing down, despite a drop in AI investment this year. 

The data also offer perspective on a JACR report last week predicting that by 2035 radiology could be seeing 350 new AI/ML product approvals for the year. Product approvals would only have to grow at about a 10% annual rate to hit that number – a figure that seems perfectly achievable given the new FDA report.
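
As a rough sanity check on that growth rate (our back-of-the-envelope math, using the 2023 figures above as the baseline and 12 years of growth to 2035): going from the 122 radiology authorizations in the latest FDA window to 350 per year implies a compound annual growth rate of about (350/122)^(1/12) ≈ 9%, and starting from all 171 AI/ML devices implies roughly 6% – both at or below the ~10% figure.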

What’s Fueling AI’s Growth

It’s no secret that the rapid growth of AI in radiology is being fueled by venture capital firms eager to see a payoff for early investments in startup AI developers. But are there signs that VCs’ appetite for radiology AI is starting to wane?

Maybe. And maybe not. While one new analysis shows that AI investments slowed in 2023 compared to the year before, another predicts that over the long term, VC investing will spur a boom in AI development that is likely to transform radiology. 

First up is an update by Signify Research to its ongoing analysis of VC funding. The new numbers show that through Q3 2023, the number of medical imaging AI deals has fallen compared to Q3 2022 (24 vs. 40). 

  • Total funding has also fallen for the second straight year, to $501M year-to-date in 2023. That compares to $771M through the third quarter of 2022, and $1.1B through the corresponding quarter of 2021. 

On the other hand, the average deal size has grown to an all-time high of $20.9M, compared to 2022 ($15.4M) and 2021 ($18M). 

  • And one company – Rapid AI – joined the exclusive club of just 14 AI vendors to have raised over $100M in total, thanks to a $75M Series C round in July 2023. 

In a look forward at AI’s future, a new analysis in JACR by researchers from the ACR Data Science Institute (DSI) directly ties VC funding to healthcare AI software development, predicting that every $1B in funding translates into 11 new product approvals, with a six-year lag between funding and approval. 

  • And the authors forecast long-term growth: In 2022 there were 69 FDA-approved products, but by 2035, funding is expected to reach $31B for the year, resulting in the release of a staggering 350 new AI products that year.
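
Taking the DSI heuristic at face value (and glossing over the six-year lag between funding and approval), the arithmetic roughly checks out: 11 approvals per $1B × $31B ≈ 341 products, in the neighborhood of the 350 figure cited.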

Further, the ACR DSI authors see a virtuous cycle developing, as increasing AI adoption spurs more investment that creates more products available to help radiologists with their workloads. 

The Takeaway

The numbers from Signify and ACR DSI don’t match up exactly, but together they paint a picture of a market segment that continues to enjoy massive VC investment. While the precise numbers may fluctuate year to year, investor interest in medical imaging AI will fuel innovation that promises to transform how radiology is practiced in years to come.

PET’s Milestone Moment

In a milestone moment for PET, CMS has ended its policy of only paying for PET scans of dementia patients if they are enrolled in a clinical trial. The move paves the way for broader use of PET for conditions like Alzheimer’s disease as new diagnostic and therapeutic agents become available. 

CMS said it was rescinding its coverage with evidence development (CED) requirement for PET payments within Medicare and Medicaid. 

  • Advocates for PET have chafed at the policy since it was established in 2013, claiming that it restricted use of PET to detect buildup of amyloid and tau in the brain – widely considered to be precursors to Alzheimer’s disease. The policy limited PET payments to one scan per lifetime, and only for patients enrolled in clinical trials. 

But the landscape began changing with the arrival of new Alzheimer’s treatments like Leqembi, approved in January 2023. CMS telegraphed its changing position in July, when it announced a review of the CED policy, and followed through with the change on October 13. The new policy…

  • Eliminates the requirement that patients be enrolled in clinical trials
  • Ends the limit of one PET scan per Alzheimer’s patient per lifetime
  • Allows Medicare Administrative Contractors (MACs) to make coverage decisions on Alzheimer’s PET
  • Rejects requests to apply the policy retroactively, for example back to when Leqembi was approved

CMS specifically cited the introduction of new anti-amyloid treatments as one of the reasons behind its change in policy. 

  • The lifetime limit is “outdated” and “not clinically appropriate” given that PET is needed both for patient selection and for deciding whether to discontinue treatment – either because it’s ineffective or because it has cleared amyloid from the brain – a key consideration for such expensive therapies. 

The news was quickly applauded by groups like SNMMI and MITA, which have long advocated for looser reimbursement rules.

The Takeaway

The CMS decision is great news for the PET community as well as for patients facing a diagnosis of Alzheimer’s disease. The question remains as to what sort of reimbursement rates providers will see from the various MACs around the US, and whether commercial payers will follow suit.

Autonomous AI for Medical Imaging is Here. Should We Embrace It?

What is autonomous artificial intelligence, and is radiology ready for this new technology? In this article, we explore one of the most exciting autonomous AI applications, ChestLink from Oxipit. 

What is Autonomous AI? 

Up to now, most interpretive AI solutions have focused on assisting radiologists with analyzing medical images. In this scenario, AI provides suggestions to radiologists and alerts them to suspicious areas, but the final diagnosis is the physician’s responsibility.

Autonomous AI flips the script by having AI run independently of the radiologist – for example, analyzing a large batch of chest X-ray exams for tuberculosis and screening out those that are certain to be normal. This can significantly reduce workload in primary care settings, where providers offering preventive health checkups may find that up to 80% of chest X-rays show no abnormalities. 

Autonomous AI frees the radiologist to focus on cases with suspicious pathology – with the potential of delivering a more accurate diagnosis to patients in real need.

One of the first of this new breed of autonomous AI is ChestLink from Oxipit. The solution received the CE Mark in March 2022, and more than a year later it is still the only AI application capable of autonomous performance. 

How ChestLink Works

ChestLink produces final chest X-ray reports on healthy patients with no involvement from human radiologists. The application only reports autonomously on chest X-ray studies where it is highly confident that the image does not include abnormalities. These studies are automatically removed from the reporting workflow. 

ChestLink enables radiologists to report on studies most likely to have abnormalities. In current clinical deployments, ChestLink automates 10-30% of all chest X-ray workflow. The exact percentage depends on the type of medical institution, with primary care facilities having the most potential for automation.
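
To make the concept concrete, here is a minimal sketch of confidence-threshold triage in Python. It is a toy illustration of the general pattern rather than Oxipit’s actual implementation – the model output, threshold, and report text are all assumptions:

```python
# Toy sketch of autonomous triage: studies a model scores as almost certainly
# normal receive an automated report; everything else goes to a radiologist.
# The model output, threshold, and report wording are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Study:
    study_id: str
    abnormality_prob: float  # output of some validated chest X-ray model

NORMAL_THRESHOLD = 0.01  # deliberately conservative; tuned on retrospective data

def triage(studies):
    auto_reported, radiologist_queue = [], []
    for s in studies:
        if s.abnormality_prob < NORMAL_THRESHOLD:
            auto_reported.append((s.study_id, "No abnormalities detected (auto-reported)"))
        else:
            radiologist_queue.append(s)
    return auto_reported, radiologist_queue

if __name__ == "__main__":
    batch = [Study("A1", 0.002), Study("A2", 0.41), Study("A3", 0.006)]
    done, queue = triage(batch)
    print(len(done), "auto-reported;", len(queue), "sent to radiologists")
```

In practice the threshold would be set and re-validated against an institution’s own retrospective data, which is essentially what the deployment framework described later in this article formalizes.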

ChestLink Clinical Validation

ChestLink was trained on a dataset with over 500k images. In clinical validation studies, ChestLink consistently performed at 99%+ sensitivity.

A recent study published in Radiology highlighted the sensitivity of the application.

“The most surprising finding was just how sensitive this AI tool was for all kinds of chest disease. In fact, we could not find a single chest X-ray in our database where the algorithm made a major mistake. Furthermore, the AI tool had a sensitivity overall better than the clinical board-certified radiologists,” said study co-author Louis Lind Plesner, MD, from the Department of Radiology at the Herlev and Gentofte Hospital in Copenhagen, Denmark.

In this study, ChestLink autonomously reported on 28% of all normal studies.

In another study at the Oulu University Hospital in Finland, researchers concluded that AI could reliably remove 36.4% of normal chest X-rays from the reporting workflow with a minimal number of false negatives, leading to effectively no compromise on patient safety. 

Safe Path to AI Autonomy

Oxipit ChestLink is currently used in healthcare facilities in the Netherlands, Finland, Lithuania, and other European countries, and is in the trial phase for deployment in one of the leading hospitals in England.

ChestLink follows a three-stage framework for clinical deployment.

  • Retrospective analysis. ChestLink analyzes a couple of years’ worth (100k+) of historical chest X-ray studies at the medical institution. In this analysis the product is validated on real-world data, and it produces a realistic estimate of what fraction of the reporting workload can be automated.
  • Semi-autonomous operations. The application moves into prospective settings, analyzing images in near-real time. ChestLink produces preliminary reports for healthy patients, which may then be approved by a certified clinician.
  • Autonomous operations. The application autonomously reports on high-confidence healthy patient studies. The application’s performance is monitored in real time with analytical tools.

Are We There Yet?

ChestLink aims to address the shortage of clinical radiologists worldwide, which has led to a substantial decline in care quality.

In the UK, the NHS currently faces a massive 33% shortfall in its radiology workforce. Nearly 71% of clinical directors of UK radiology departments feel that they do not have a sufficient number of radiologists to deliver safe and effective patient care.

ChestLink offers a safe pathway into autonomous operations by automating a significant and somewhat mundane portion of radiologist workflow without any negative effects for patient care. 

So should we embrace autonomous AI? The real question should be, can we afford not to? 

Making Screening Better

While population-based cancer screening has demonstrated its value, there’s no question that screening could use improvement. Two new studies this week show how to improve on one of screening’s biggest challenges: getting patients to attend their follow-up exams.

In the first study in JACR, researchers from the University of Rochester wanted to see if notifying people about actionable findings shortly after screening exams had an impact on follow-up rates. Patients were notified within one to three weeks after the radiology report was completed. 

They also examined different methods of patient communication, including snail-mail letters, notifications from Epic’s MyChart electronic patient portal, and phone calls. Among approximately 2.5k patients, follow-up adherence rates within one month of the exam’s due date varied by outreach method as follows:

  • Phone calls – 60%
  • Letters – 57%
  • Controls – 53%
  • MyChart notifications – 36%

(The researchers noted that the COVID-19 pandemic may have disproportionately affected those in the MyChart group.) 

Fortunately, the university uses natural language processing-based software called Backstop to make sure no follow-up recommendations fall through the cracks. 

  • Backstop includes Nuance’s mPower technology to identify actionable findings from unstructured radiology reports; it triggers notifications to both primary care providers and patients about the need to complete follow-up.
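
For illustration only, here is a toy, rule-based version of the same safety-net pattern – scan unstructured report text for follow-up recommendations and fan out notifications to the provider and the patient. This is not the actual Backstop/mPower implementation; the phrases, names, and message formats are assumptions:

```python
# Toy illustration of a follow-up "safety net": flag actionable findings in
# unstructured report text and create notifications. Patterns and message
# formats are hypothetical, not the actual Backstop/mPower implementation.
import re

FOLLOWUP_PATTERNS = [
    r"recommend(?:ed)?\s+follow-?up",
    r"follow-?up\s+(?:CT|MRI|ultrasound|imaging)",
    r"further\s+evaluation\s+(?:is\s+)?recommended",
]

def find_actionable_findings(report_text: str):
    """Return the sentences that appear to contain a follow-up recommendation."""
    sentences = re.split(r"(?<=[.!?])\s+", report_text)
    pattern = re.compile("|".join(FOLLOWUP_PATTERNS), re.IGNORECASE)
    return [s for s in sentences if pattern.search(s)]

def build_notifications(patient_id: str, findings):
    """One message for the ordering provider and one for the patient."""
    if not findings:
        return []
    summary = " ".join(findings)
    return [
        {"to": f"provider_of:{patient_id}", "body": f"Actionable finding(s): {summary}"},
        {"to": f"patient:{patient_id}", "body": "Your recent imaging report recommends a follow-up exam."},
    ]

if __name__ == "__main__":
    report = ("Indeterminate 7 mm pulmonary nodule in the right upper lobe. "
              "Follow-up CT in 6 months is recommended.")
    print(build_notifications("12345", find_actionable_findings(report)))
```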

Once the full round of Backstop notifications had taken place, compliance rates rose, and there was no statistically significant difference based on how patients had received the early notification: letter (89%), phone (91%), MyChart (90%), and control (88%). 

In the second study, researchers in JAMA described how they used automated algorithms to analyze EHR data from 12k patients to identify those eligible for follow-up for cancer screening exams.

  • They then tested three levels of intervention to get people to their exams: EHR reminders alone, EHR reminders plus patient outreach, and EHR reminders plus outreach plus patient navigation. 

Patients who got all three interventions (EHR reminders, outreach, and navigation) or two (EHR reminders and outreach) had the highest follow-up completion rates at 120 days compared to usual care (31% for both vs. 23%). Rates for those who only got EHR reminders were similar to usual care (23%).

The Takeaway

This week’s studies indicate that while health technology is great, it’s how you use it that matters. While IT tools can identify the people who need follow-up, it’s up to healthcare personnel to make sure patients get the care they need.

AI Tug of War Continues

The ongoing tug of war over AI’s value to radiology continues. This time the rope has moved in AI’s favor with publication of a new study in JAMA Network Open that shows the potential of a new type of AI language model for creating radiology reports.

  • Headlines about AI have ping-ponged in recent weeks, from positive studies like MASAI and PERFORMS to more equivocal trials like a chest X-ray study in Radiology and news from the UK that healthcare authorities may not be ready for chest X-ray AI’s full clinical roll-out. 

In the new paper, Northwestern University researchers tested a chest X-ray AI algorithm they developed with a transformer technique, a type of generative AI language model that can both analyze images and generate radiology text as output. 

  • Transformer language models show promise due to their ability to combine both image and non-image data, as researchers showed in a paper last week.

The Northwestern researchers tested their transformer model in 500 chest radiographs of patients evaluated overnight in the emergency department from January 2022 to January 2023. 

Reports generated by AI were then compared to reports from a teleradiologist as well as the final report by an in-house radiologist, which was set as the gold standard. The researchers found that AI-generated reports …

  • Had sensitivity a bit lower than teleradiology reports (85% vs. 92%)
  • Had specificity a bit higher (99% vs. 97%)
  • In some cases improved on the in-house radiology report by detecting subtle abnormalities missed by the radiologist

Generative AI language models like the Northwestern algorithm could perform better than algorithms that rely on a classification approach to predicting the presence of pathology. Classification models limit medical diagnoses to yes/no predictions that may omit context relevant to clinical care, the researchers believe. 
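
To illustrate the architectural difference the researchers are drawing – a yes/no classification head versus a transformer decoder that generates report text conditioned on image features – here is a minimal PyTorch sketch. It is not the Northwestern model; the toy backbone, module sizes, and names are assumptions:

```python
# Minimal sketch contrasting two design patterns for chest X-ray AI:
# (1) a classification head that outputs per-finding yes/no probabilities, and
# (2) a transformer decoder that generates free-text report tokens.
# All module sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Toy CNN backbone producing a sequence of patch-like image features."""
    def __init__(self, dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):                        # x: (B, 1, H, W)
        feats = self.conv(x)                     # (B, dim, H/4, W/4)
        return feats.flatten(2).transpose(1, 2)  # (B, N, dim) feature "tokens"

class ClassifierHead(nn.Module):
    """Classification approach: one probability per predefined finding."""
    def __init__(self, dim=256, n_findings=14):
        super().__init__()
        self.fc = nn.Linear(dim, n_findings)
    def forward(self, feats):
        return torch.sigmoid(self.fc(feats.mean(dim=1)))  # (B, n_findings)

class ReportDecoder(nn.Module):
    """Generative approach: decode report tokens conditioned on image features."""
    def __init__(self, vocab=5000, dim=256, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.out = nn.Linear(dim, vocab)
    def forward(self, tokens, feats):            # tokens: (B, T) report so far
        T = tokens.size(1)
        causal_mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        hidden = self.decoder(self.embed(tokens), feats, tgt_mask=causal_mask)
        return self.out(hidden)                  # (B, T, vocab) next-token logits

if __name__ == "__main__":
    xray = torch.randn(1, 1, 64, 64)
    feats = ImageEncoder()(xray)
    print(ClassifierHead()(feats).shape)                                   # (1, 14)
    print(ReportDecoder()(torch.zeros(1, 5, dtype=torch.long), feats).shape)  # (1, 5, 5000)
```

The practical difference is the output: the classifier yields per-finding probabilities, while the decoder emits token logits that can be sampled into full report sentences, preserving the kind of context the researchers say yes/no labels can miss.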

In real-world clinical use, the Northwestern team thinks their model could assist emergency physicians in circumstances where in-house radiologists or teleradiologists aren’t immediately available, helping triage emergent cases.

The Takeaway

After the negative headlines of the last few weeks, it’s good to see positive news about AI again. Although the current study is relatively small and much larger trials are needed, the Northwestern research has promising implications for the future of transformer-based AI language models in radiology.
