Making predictions is a messy business (just ask Geoffrey Hinton). So we’re always appreciative whenever key opinion leaders stick their necks out to offer thoughts on where radiology is headed and the major trends that will shape the specialty’s future.
Two of radiology’s top thought leaders on AI and imaging informatics – Curtis Langlotz, MD, PhD, and Paul Chang, MD – gaze into the crystal ball in two articles published this week in Radiology as part of the journal’s centennial celebration. Langlotz’s article tackles AI, and his predictions include:
Virtual assistants will help radiologists draft reports – and reduce burnout
Radiology workstations will become cloud-based cockpits that seamlessly unify image display, reporting, and AI
Large language models like ChatGPT will help patients better understand their radiology reports
The FDA will reform its regulation of AI to be more flexible and speed AI authorizations (see our article in The Wire below)
Large databases like the Medical Imaging and Data Resource Center (MIDRC) will spur data sharing and, in turn, more rapid AI development
Langlotz’s predictions are echoed by Chang’s accompanying article in Radiology, in which he forecasts the future of imaging informatics. Like Langlotz, Chang sees the new array of AI-enabled tools as beneficial agents that will help radiologists manage growing workloads through dashboards, enhanced radiology reports, and workflow automation.
The Takeaway
This week’s articles are required reading for anyone following the meteoric growth of AI in radiology. Far from Hinton’s dystopian view of a world without radiologists, Langlotz and Chang predict a future in which AI and IT technologies assist radiologists to do their jobs better and with less stress. We know which vision we prefer.
In the previous issue of The Imaging Wire, we discussed how venture capital investment in AI developers is fueling rapid growth in new AI applications for radiologists (despite a slowdown this year).
This trend was underscored late last week with new data from the FDA showing strong growth in the number of regulatory authorizations of AI and machine learning-enabled devices in calendar 2023 compared to the year before. The findings show:
A resurgence of AI/ML authorizations this year, with over 30% growth compared to 14% in 2022 and 15% in 2021 – The last time authorizations grew this fast was in 2020 (+39%)
The FDA authorized 171 AI/ML-enabled devices in the past year. Of the total, 155 had final decision dates between August 1, 2022, and July 30, 2023, while 16 were reclassifications from prior periods
Devices intended for radiology made up 79% of the total (122/155), an impressive number but down slightly compared to 87% in 2022
Other medical specialties included cardiology (9%), neurology (5%), and gastroenterology/urology (4%)
One interesting wrinkle in the report: despite all the buzz around large language models and generative AI, the FDA has yet to authorize a device that uses generative AI or is powered by LLMs.
The Takeaway
The FDA’s new report confirms that radiology AI shows no sign of slowing down, despite a drop in AI investment this year.
The data also offer perspective on a JACR report last week predicting that by 2035 radiology could see 350 new AI/ML product approvals per year. Product approvals would only have to grow at about a 10% annual rate to hit that number – a figure that seems perfectly achievable given the new FDA report.
It’s no secret that the rapid growth of AI in radiology is being fueled by venture capital firms eager to see a payoff for early investments in startup AI developers. But are there signs that VCs’ appetite for radiology AI is starting to wane?
Maybe. And maybe not. While one new analysis shows that AI investments slowed in 2023 compared to the year before, another predicts that over the long term, VC investing will spur a boom in AI development that is likely to transform radiology.
First up is an update by Signify Research to its ongoing analysis of VC funding. The new numbers show that through Q3 2023, the number of medical imaging AI deals has fallen compared to Q3 2022 (24 vs. 40).
Total funding has also fallen for the second straight year, to $501M year-to-date in 2023. That compares to $771M through the third quarter of 2022, and $1.1B through the corresponding quarter of 2021.
On the other hand, the average deal size has grown to an all-time high of $20.9M, compared to 2022 ($15.4M) and 2021 ($18M).
And one company – Rapid AI – joined the exclusive club of just 14 AI vendors that have raised over $100M in total, thanks to a $75M Series C round in July 2023.
In a look forward at AI’s future, a new analysis in JACR by researchers from the ACR Data Science Institute (DSI) directly ties VC funding to healthcare AI software development, predicting that every $1B in funding translates into 11 new product approvals, with a six-year lag between funding and approval.
And the authors forecast long-term growth: In 2022 there were 69 FDA-approved products, but by 2035, funding is expected to reach $31B for the year, resulting in the release of a staggering 350 new AI products that year.
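As a rough sanity check on how those two figures fit together, the JACR rule of thumb multiplies out as follows (a back-of-the-envelope sketch in Python that sets aside the paper's lag structure and regression details):

```python
# Back-of-the-envelope check of the JACR rule of thumb:
# roughly 11 new product approvals per $1B of VC funding.
approvals_per_billion = 11        # approvals per $1B in funding (from the JACR analysis)
annual_funding_billions = 31      # projected annual funding of ~$31B

projected_annual_approvals = approvals_per_billion * annual_funding_billions
print(projected_annual_approvals)  # 341, in line with the ~350 products forecast for 2035
```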
Further, the ACR DSI authors see a virtuous cycle developing, as increasing AI adoption spurs more investment that creates more products available to help radiologists with their workloads.
The Takeaway
The numbers from Signify and ACR DSI don’t match up exactly, but together they paint a picture of a market segment that continues to enjoy massive VC investment. While the precise numbers may fluctuate year to year, investor interest in medical imaging AI will fuel innovation that promises to transform how radiology is practiced in years to come.
What is autonomous artificial intelligence, and is radiology ready for this new technology? In this paper, we explore one of the most exciting autonomous AI applications, ChestLink from Oxipit.
What is Autonomous AI?
Up to now, most interpretive AI solutions have focused on assisting radiologists with analyzing medical images. In this scenario, AI provides suggestions to radiologists and alerts them to suspicious areas, but the final diagnosis is the physician’s responsibility.
Autonomous AI flips the script by having AI run independently of the radiologist, such as analyzing a large batch of chest X-ray exams for tuberculosis and screening out those it is certain are normal. This can significantly reduce the workload in primary care, where providers offering preventive health checkups may find that up to 80% of chest X-rays show no abnormalities.
Autonomous AI frees the radiologist to focus on cases with suspicious pathology – with the potential of delivering a more accurate diagnosis to patients in real need.
One of the first of this new breed of autonomous AI is ChestLink from Oxipit. The solution received the CE Mark in March 2022, and more than a year later it is still the only AI application capable of autonomous performance.
How ChestLink Works
ChestLink produces final chest X-ray reports on healthy patients with no involvement from human radiologists. The application only reports autonomously on chest X-ray studies where it is highly confident that the image does not include abnormalities. These studies are automatically removed from the reporting workflow.
ChestLink enables radiologists to report on studies most likely to have abnormalities. In current clinical deployments, ChestLink automates 10-30% of all chest X-ray workflow. The exact percentage depends on the type of medical institution, with primary care facilities having the most potential for automation.
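Conceptually, the triage step that drives this automation can be sketched in a few lines of Python. The function names, the normality-confidence score, and the threshold below are illustrative assumptions for the sake of the sketch, not Oxipit's actual implementation:

```python
def triage_chest_xrays(studies, model, confidence_threshold=0.99):
    """Illustrative sketch of autonomous normal-study triage.

    `model.predict_normal_probability` is a hypothetical call returning the
    model's confidence that a study contains no abnormalities; the 0.99
    threshold is an assumed value, not ChestLink's actual operating point.
    """
    autonomous_reports, radiologist_worklist = [], []
    for study in studies:
        p_normal = model.predict_normal_probability(study)
        if p_normal >= confidence_threshold:
            # High-confidence normal: issue a final report and remove the
            # study from the human reporting workflow.
            autonomous_reports.append((study, "No abnormalities detected"))
        else:
            # Everything else stays on the radiologist's worklist, so human
            # attention is concentrated on likely-abnormal studies.
            radiologist_worklist.append(study)
    return autonomous_reports, radiologist_worklist
```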
ChestLink Clinical Validation
ChestLink was trained on a dataset with over 500k images. In clinical validation studies, ChestLink consistently performed at 99%+ sensitivity.
“The most surprising finding was just how sensitive this AI tool was for all kinds of chest disease. In fact, we could not find a single chest X-ray in our database where the algorithm made a major mistake. Furthermore, the AI tool had a sensitivity overall better than the clinical board-certified radiologists,” said study co-author Louis Lind Plesner, MD, from the Department of Radiology at the Herlev and Gentofte Hospital in Copenhagen, Denmark.
In this study, ChestLink autonomously reported on 28% of all normal studies.
In another study at the Oulu University Hospital in Finland, researchers concluded that AI could reliably remove 36.4% of normal chest X-rays from the reporting workflow with a minimal number of false negatives, leading to effectively no compromise on patient safety.
Safe Path to AI Autonomy
Oxipit ChestLink is currently used in healthcare facilities in the Netherlands, Finland, Lithuania, and other European countries, and is in the trial phase for deployment in one of the leading hospitals in England.
ChestLink follows a three-stage framework for clinical deployment.
Retrospective analysis. ChestLink analyzes a couple of years’ worth (100k+ studies) of historical chest X-ray exams at the medical institution. In this analysis the product is validated on real-world data, and it provides a realistic estimate of what fraction of the reporting workload can be automated.
Semi-autonomous operations. The application moves into prospective settings, analyzing images in near-real time. ChestLink produces preliminary reports for healthy patients, which may then be approved by a certified clinician.
Autonomous operations. The application autonomously reports on high-confidence healthy patient studies, and its performance is monitored in real time with analytical tools.
Are We There Yet?
ChestLink aims to address the shortage of clinical radiologists worldwide, which has led to a substantial decline in care quality.
In the UK, the NHS currently faces a massive 33% shortfall in its radiology workforce. Nearly 71% of clinical directors of UK radiology departments feel that they do not have a sufficient number of radiologists to deliver safe and effective patient care.
ChestLink offers a safe pathway into autonomous operations by automating a significant and somewhat mundane portion of radiologist workflow without any negative effects for patient care.
So should we embrace autonomous AI? The real question should be, can we afford not to?
The ongoing tug of war over AI’s value to radiology continues. This time the rope has moved in AI’s favor with publication of a new study in JAMA Network Open that shows the potential of a new type of AI language model for creating radiology reports.
Headlines about AI have ping-ponged in recent weeks, from positive studies like MASAI and PERFORMS to more equivocal trials like a chest X-ray study in Radiology and news from the UK that healthcare authorities may not be ready for chest X-ray AI’s full clinical roll-out.
In the new paper, Northwestern University researchers tested a chest X-ray AI algorithm they developed with a transformer technique, a type of generative AI language model that can both analyze images and generate radiology text as output.
Transformer language models show promise due to their ability to combine both image and non-image data, as researchers showed in a paper last week.
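For readers curious about what such a model looks like under the hood, here is a minimal PyTorch sketch of the general architecture class: a vision encoder feeding a text-generating transformer decoder. The patch handling, layer sizes, and vocabulary are illustrative assumptions, not the Northwestern group's actual model:

```python
import torch
import torch.nn as nn

class ImageToReportModel(nn.Module):
    """Toy image-to-text transformer: encode an X-ray, decode a report.

    The patch embedding, layer sizes, and vocabulary are illustrative
    assumptions, not the Northwestern group's actual architecture.
    """
    def __init__(self, vocab_size=10_000, d_model=512):
        super().__init__()
        self.patch_embed = nn.Linear(16 * 16, d_model)   # toy encoder: embed 16x16 image patches
        self.token_embed = nn.Embedding(vocab_size, d_model)
        decoder_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)
        self.to_vocab = nn.Linear(d_model, vocab_size)

    def forward(self, image_patches, report_tokens):
        memory = self.patch_embed(image_patches)    # (batch, patches, d_model) image features
        tgt = self.token_embed(report_tokens)       # (batch, seq_len, d_model) report so far
        decoded = self.decoder(tgt, memory)         # report tokens cross-attend to the image
        return self.to_vocab(decoded)               # logits used to generate the next word

# One dummy "chest X-ray" split into 196 patches of 16x16 pixels, decoding a 20-token report.
model = ImageToReportModel()
logits = model(torch.randn(1, 196, 256), torch.randint(0, 10_000, (1, 20)))
print(logits.shape)  # torch.Size([1, 20, 10000])
```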
The Northwestern researchers tested their transformer model on 500 chest radiographs of patients evaluated overnight in the emergency department from January 2022 to January 2023.
Reports generated by AI were then compared to reports from a teleradiologist as well as the final report by an in-house radiologist, which was set as the gold standard. The researchers found that AI-generated reports …
Had sensitivity a bit lower than teleradiology reports (85% vs. 92%)
Had specificity a bit higher (99% vs. 97%)
In some cases improved on the in-house radiology report by detecting subtle abnormalities missed by the radiologist
Generative AI language models like the Northwestern algorithm could perform better than algorithms that rely on a classification approach to predicting the presence of pathology. Classification models limit medical diagnoses to yes/no predictions that may omit context relevant to clinical care, the researchers believe.
In real-world clinical use, the Northwestern team thinks their model could assist emergency physicians in circumstances where in-house radiologists or teleradiologists aren’t immediately available, helping triage emergent cases.
The Takeaway
After the negative headlines of the last few weeks, it’s good to see positive news about AI again. Although the current study is relatively small and much larger trials are needed, the Northwestern research has promising implications for the future of transformer-based AI language models in radiology.
In another blow to radiology AI, the UK’s national technology assessment agency issued an equivocal report on AI for chest X-ray, stating that more research is needed before the technology can enter routine clinical use.
The report came from the National Institute for Health and Care Excellence (NICE), which assesses new health technologies that have the potential to address unmet NHS needs.
The NHS sees AI as a potential solution to its challenge of meeting rising demand for imaging services, a dynamic that’s leading to long wait times for exams.
But at least some corners of the UK health establishment have concerns about whether AI for chest X-ray is ready for prime time.
The NICE report states that – despite the unmet need for quicker chest X-ray reporting – there is insufficient evidence to support the technology, and as such it’s not possible to assess its clinical and cost benefits. And it said there is “no evidence” on the accuracy of AI-assisted clinician review compared to clinicians working alone.
As such, the use of AI for chest X-ray in the NHS should be limited to research, with the following additional recommendations …
Centers already using AI software to review chest X-rays may continue to do so, but only as part of an evaluation framework and alongside clinician review
Purchase of chest X-ray AI software should be made through corporate, research, or non-core NHS funding
More research is needed on AI’s impact on a number of outcomes, such as CT referrals, healthcare costs and resource use, review and reporting time, and diagnostic accuracy when used alongside clinician review
The NICE report listed 14 commercially available chest X-ray algorithms that need more research, and it recommended prospective studies to address gaps in evidence. AI developers will be responsible for performing these studies.
The Takeaway
Taken with last week’s disappointing news on AI for radiology, the NICE report is a wakeup call for what had been one of the most promising clinical use cases for AI. The NHS had been seen as a leader in spearheading clinical adoption of AI; for chest X-ray, clinicians in the UK may have to wait just a bit longer.
There’s no question AI is the future of radiology. But AI’s drive to widespread clinical use is going to hit some speed bumps along the way.
This week is a case in point. Two studies were published showing AI’s limitations and underscoring the challenges faced in making AI an everyday clinical reality.
In the first study, researchers found that radiologists outperformed four commercially available AI algorithms for analyzing chest X-rays (Annalise.ai, Milvue, Oxipit, and Siemens Healthineers) in a Radiology study of 2k patients.
Researchers from Denmark found the AI tools had moderate to high sensitivity for three detection tasks:
airspace disease (72%-91%)
pneumothorax (63%-90%)
pleural effusion (62%-95%).
But the algorithms also had higher false-positive rates and performance dropped in cases with smaller pathology and multiple findings. The findings are disappointing, especially since they got such widespread play in the mainstream media.
But this week’s second study also brought worrisome news, this time in Radiology: Artificial Intelligence, about an AI training method called foundation models that many hope holds the key to better algorithms.
Foundation models are designed to address the challenge of finding enough high-quality data for AI training. Most algorithms are trained with actual de-identified clinical data that have been labeled and referenced to ground truth; foundation models are AI neural networks pre-trained with broad, unlabeled data and then fine-tuned with smaller volumes of more detailed data to perform specific tasks.
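To make the distinction concrete, here is a minimal PyTorch sketch of the foundation-model recipe: a large encoder whose weights are assumed to come from pre-training on broad, unlabeled data, followed by fine-tuning of a small task head on a much smaller labeled set. The architecture, data shapes, and training loop are simplified assumptions, not the models compared in the study:

```python
import torch
import torch.nn as nn

# Stand-in for a large encoder; imagine its weights were pre-trained on
# ~800k broad, unlabeled chest X-rays (the foundation-model step).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 512), nn.ReLU())

# Small task head fine-tuned on a much smaller labeled dataset for the
# four findings mentioned in the study.
head = nn.Linear(512, 4)  # no finding, pleural effusion, cardiomegaly, pneumothorax

for p in backbone.parameters():       # freeze the pre-trained encoder
    p.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

labeled_images = torch.randn(32, 1, 224, 224)    # one small labeled fine-tuning batch
labels = torch.randint(0, 2, (32, 4)).float()    # multi-label ground truth

optimizer.zero_grad()
logits = head(backbone(labeled_images))
loss = loss_fn(logits, labels)
loss.backward()                                  # gradients flow only into the task head
optimizer.step()
```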
Researchers in the new study found that a chest X-ray algorithm trained on a foundation model with 800k images had lower performance than an algorithm trained with the CheXpert reference model in a group of 42.9k patients. The foundation model’s performance lagged for four possible results – no finding, pleural effusion, cardiomegaly, and pneumothorax – as follows…
Lower by 6.8-7.7% in females for the “no finding” result
Down by 10.7-11.6% in Black patients in detecting pleural effusion
Lower performance across all groups for classifying cardiomegaly
The decline in performance for female and Black patients is particularly concerning given recent studies on bias and lack of generalizability for AI.
The Takeaway
This week’s studies show that there’s not always going to be a clear road ahead for AI in its drive to routine clinical use. The study on foundation models in particular could have ramifications for AI developers looking for a shortcut to faster algorithm development. They may want to slow their roll.
How can you predict whether an AI algorithm will fall short for a particular clinical use case such as detecting cancer? Researchers in Radiology took a crack at this conundrum by developing what they call an “uncertainty quantification” metric to predict when an AI algorithm might be less accurate.
AI is rapidly moving into wider clinical use, with a number of exciting studies published in just the last few months showing how AI can help radiologists interpret screening mammograms or direct which women should get supplemental breast MRI.
But AI isn’t infallible. And unlike a human radiologist who might be less confident in a particular diagnosis, an AI algorithm doesn’t have a built-in hedging mechanism.
So researchers from Denmark and the Netherlands decided to build one. They took publicly available AI algorithms and tweaked their code so they produced “uncertainty quantification” scores with their predictions.
They then tested how well the scores predicted AI performance in a dataset of 13k images for three common tasks covering some of the deadliest types of cancer:
1) detecting pancreatic ductal adenocarcinoma on CT
2) detecting clinically significant prostate cancer on MRI
3) predicting pulmonary nodule malignancy on low-dose CT
Researchers classified the highest 80% of the AI predictions as “certain,” and the remaining 20% as “uncertain,” and compared AI’s accuracy in both groups, finding …
AI led to significant accuracy improvements in the “certain” group for pancreatic cancer (80% vs. 59%), prostate cancer (90% vs. 63%), and pulmonary nodule malignancy prediction (80% vs. 51%)
AI accuracy was comparable to clinicians when its predictions were “certain” (80% vs. 78%, P=0.07), but much worse when “uncertain” (50% vs. 68%, P<0.001)
Using AI to triage “uncertain” cases produced overall accuracy improvements for pancreatic and prostate cancer (+5%) and lung nodule malignancy prediction (+6%) compared to a no-triage scenario
How would uncertainty quantification be used in clinical practice? It could play a triage role, deprioritizing radiologist review of easier cases while helping them focus on more challenging studies. It’s a concept similar to the MASAI study of mammography AI.
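To illustrate the idea, here is a short Python sketch of the certain/uncertain split described above. The specific uncertainty score (how far a prediction sits from a confident 0 or 1) and the simulated data are assumptions for illustration; the study's authors derived their scores by modifying the publicly available algorithms themselves:

```python
import numpy as np

def split_by_uncertainty(probs, labels, certain_fraction=0.80):
    """Split predictions into the most 'certain' 80% and the remaining 20%.

    The uncertainty score here (distance of a probability from a confident
    0 or 1) is an assumed stand-in for the study's actual metric.
    """
    uncertainty = 1.0 - np.abs(probs - 0.5) * 2      # 0 = very confident, 1 = maximally unsure
    threshold = np.quantile(uncertainty, certain_fraction)
    certain = uncertainty <= threshold               # most confident 80% of predictions
    preds = (probs >= 0.5).astype(int)

    def accuracy(mask):
        return (preds[mask] == labels[mask]).mean()

    return accuracy(certain), accuracy(~certain)

# Toy example with 1,000 simulated predictions: accuracy should be higher
# on the "certain" subset than on the "uncertain" one.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)
probs = np.clip(labels + rng.normal(0, 0.35, 1000), 0, 1)
print(split_by_uncertainty(probs, labels))
```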
The Takeaway
Like MASAI, the new findings present exciting new possibilities for AI implementation. They also present a framework within which AI can be implemented more safely by alerting clinicians to cases in which AI’s analysis might fall short – and enabling humans to step in and pick up the slack.
A deep learning algorithm trained to analyze mammography images did a better job than traditional risk models in predicting breast cancer risk. The study shows the AI model could direct the use of supplemental screening breast MRI for women who need it most.
Breast MRI has emerged (along with ultrasound) as one of the most effective imaging modalities to supplement conventional X-ray-based mammography. Breast MRI performs well regardless of breast tissue density, and can even be used for screening younger high-risk women for whom radiation is a concern.
But there are also disadvantages to breast MRI. It’s expensive and time-consuming, and clinicians aren’t always sure which women should get it. As a result, breast MRI is used too often in women at average risk and not often enough in those at high risk.
In the current study in Radiology, researchers from MGH compared the Mirai deep learning algorithm to conventional risk-prediction models. Mirai was developed at MIT to predict five-year breast cancer risk, and the first papers on the model emerged in 2019; previous studies have already demonstrated the algorithm’s prowess for risk prediction.
Mirai was used to analyze mammograms and develop risk scores for 2.2k women who also received 4.2k screening breast MRI exams from 2017 to 2020 at four facilities. Researchers then compared the performance of the algorithm to traditional risk tools like Tyrer-Cuzick and NCI’s Breast Cancer Risk Assessment (BCRAT), finding that …
In women Mirai identified as high risk, the cancer detection rate per 1k on breast MRI was far higher compared to those classified as high risk by Tyrer-Cuzick and BCRAT (20.6 vs. 6.0 & 6.8)
Mirai had a higher PPV for predicting abnormal findings on breast MRI screening (14.6% vs. 5.0% & 5.5%)
Mirai scored higher in PPV of biopsies recommended (32.4% vs. 12.7% & 11.1%) and PPV for biopsies performed (36.4% vs. 13.5% & 12.5%)
The Takeaway
Breast imaging has become one of the AI use cases with the most potential, based on recent studies like PERFORMS and MASAI, and the new study shows Mirai could be useful in directing women to breast MRI screening. Like the previous studies, the current research is pointing to a near-term future in which AI and deep learning can make breast screening more accurate and cost-effective than it’s ever been before.
A new article in JACR highlights the economic barriers that are limiting wider adoption of AI in healthcare in the US. The study paints a picture of how the complex nature of Medicare reimbursement puts the country at risk of falling behind other nations in the quest to implement healthcare AI on a national scale.
The success of any new medical technology in the US has always been linked to whether physicians can get reimbursed for using it. But there are a variety of paths to reimbursement in the Medicare system, each one with its own rules and idiosyncrasies.
The establishment of the NTAP program was thought to be a milestone in paying for AI for inpatients, for example, but the JACR authors note that NTAP payments are time-limited to no more than three years. A variety of other factors are limiting AI reimbursement, including …
All of the AI payments approved under the NTAP program have expired, and as such no AI algorithm is being reimbursed under NTAP
Budget-neutral requirements in the Medicare Physician Fee Schedule mean that AI reimbursement is often a zero-sum game. Payments made for one service (such as AI) must be offset by reductions for something else
Only one imaging AI algorithm has successfully navigated CMS to achieve Category I reimbursement in the Physician Fee Schedule, starting in 2024 for fractional flow reserve (FFR) analysis
Standing in stark contrast to the Medicare system is the NHS in the UK, where regulators see AI as an invaluable tool to address chronic workforce shortages in radiology and are taking aggressive action to promote its adoption. Not only has the NHS announced a £21M fund to fuel AI adoption, but it is mulling the implementation of a national platform to enable AI algorithms to be accessed within standard radiology workflow.
The Takeaway
The JACR article illustrates how Medicare’s Byzantine reimbursement structure puts barriers in the path of wider AI adoption. Although there have been some reimbursement victories such as NTAP, these have been temporary, and the fact that only one radiology AI algorithm has achieved a Category I CPT code must be a sobering thought to AI proponents.