Economic Barriers to AI

A new article in JACR highlights the economic barriers that are limiting wider adoption of AI in healthcare in the US. The study paints a picture of how the complex nature of Medicare reimbursement puts the country at risk of falling behind other nations in the quest to implement healthcare AI on a national scale. 

The success of any new medical technology in the US has always been linked to whether physicians can get reimbursed for using it. But there are a variety of paths to reimbursement in the Medicare system, each one with its own rules and idiosyncrasies. 

The establishment of the NTAP program was thought to be a milestone in paying for AI for inpatients, for example, but the JACR authors note that NTAP payments are time-limited to no more than three years. A variety of other factors are limiting AI reimbursement, including … 

  • All of the AI payments approved under the NTAP program have expired, and as such no AI algorithm is being reimbursed under NTAP 
  • Budget-neutral requirements in the Medicare Physician Fee Schedule mean that AI reimbursement is often a zero-sum game. Payments made for one service (such as AI) must be offset by reductions for something else 
  • Only one imaging AI algorithm has successfully navigated CMS to achieve Category I reimbursement in the Physician Fee Schedule, starting in 2024 for fractional flow reserve (FFR) analysis

Standing in stark contrast to the Medicare system is the NHS in the UK, where regulators see AI as an invaluable tool to address chronic workforce shortages in radiology and are taking aggressive action to promote its adoption. Not only has the NHS announced a £21M fund to fuel AI adoption, but it is mulling the implementation of a national platform to enable AI algorithms to be accessed within standard radiology workflow. 

The Takeaway

The JACR article illustrates how Medicare’s Byzantine reimbursement structure puts barriers in the path of wider AI adoption. Although there have been some reimbursement victories such as NTAP, these have been temporary, and the fact that only one radiology AI algorithm has achieved a Category I CPT code must be a sobering thought to AI proponents.

Fine-Tuning Cardiac CT

CT has established itself as an excellent cardiac imaging modality. But there can still be some fine-tuning in terms of exactly how and when to use it, especially for assessing people presenting with chest pain. 

Two studies in JAMA Cardiology tackle this head-on, presenting new evidence that supports a more conservative – and precise – approach to determining which patients get follow-up testing. The studies also address concerns that using coronary CT angiography (CCTA) as an initial test before invasive catheterization could lead to unnecessary testing.

In the PRECISE study, researchers analyzed 2.1k patients from 2018 to 2021 who had stable symptoms of suspected coronary artery disease (CAD). Patients were randomized to a usual testing strategy (such as cardiac SPECT or stress echo), or a precision strategy that employed CCTA with selected fractional flow reserve CT (FFR-CT). 

The precision strategy group was further subdivided into a subgroup of those at minimal risk of cardiac events (20%) for whom testing was deferred to see if utilization could be reduced even further. In the precision strategy group…

  • Rates of invasive catheterization without coronary obstruction were lower (4% vs. 11%)
  • Testing was lower versus the usual testing group (84% vs. 94%)
  • Positive tests were more common (18% vs. 13%)
  • 64% of the deferred-testing subgroup got no testing at all
  • Adverse events were higher, but the difference was not statistically significant

To expand on the analysis, JAMA Cardiology published a related study that further investigated the safety of the deferred-testing strategy at one-year follow-up. Researchers compared adverse events in the deferred testing group to those who got the usual testing strategy, finding that the deferred testing group had…

  • A lower incidence rate of adverse events (0.9 vs. 5.9 per 100 patient-years)
  • A lower rate of invasive cardiac cath without obstructive CAD (1.0 vs. 6.5 per 100 patient-years)

The results from both studies show that a strategy of deferring testing for low-risk CAD patients while sending higher-risk patients to CCTA and FFR-CT is clinically effective with no adverse impact on patient safety.

The Takeaway
The new findings don’t take any of the luster off cardiac CT; they simply add to the body of knowledge demonstrating when to use – and not to use – this incredibly powerful tool for directing patient care. And in the emerging era of precision medicine, that’s what it’s all about.

Radiation and Cancer Risk

New research on the cancer risk of low-dose ionizing radiation could have disturbing implications for those who are exposed to radiation on the job – including medical professionals. In a new study in BMJ, researchers found that nuclear workers exposed to occupational levels of radiation had a cancer mortality risk that was higher than previously estimated.

The link between low-dose radiation and cancer has long been controversial. Most studies on the radiation-cancer connection are based on Japanese atomic bomb survivors, many of whom were exposed to far higher levels of radiation than most people receive over their lifetimes – even those who work with ionizing radiation. 

The question is whether that data can be extrapolated to people exposed to much lower levels of radiation, such as nuclear workers, medical professionals, or even patients. To that end, researchers in the International Nuclear Workers Study (INWORKS) have been tracking low-dose radiation exposure and its connection to mortality in nearly 310k people in France, the UK, and the US who worked in the nuclear industry from 1944 to 2016.

INWORKS researchers previously published studies showing low-dose radiation exposure to be carcinogenic, but the new findings in BMJ offer an even stronger link. For the study, researchers tracked radiation exposure via dosimetry badges worn by the workers, then calculated rates of death from solid cancer based on cumulative exposure levels, finding: 

  • Solid cancer mortality risk was 52% higher per 1 Gy of cumulative exposure
  • Individuals who received the occupational radiation limit of 20 mSv per year would have a 5.2% increased solid cancer mortality rate over five years
  • There was a linear association between low-dose radiation exposure and cancer mortality, meaning that cancer mortality risk was also found at lower levels of exposure 
  • The dose-response association seen in the study was even higher than in studies of atomic bomb survivors (52% vs. 32%)
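The study's headline numbers can be checked with back-of-envelope arithmetic, assuming the simple linear (no-threshold) dose-response model the researchers describe. The function name below is illustrative, not from the study, and the mSv-to-Gy conversion treats 1 Sv as roughly 1 Gy, which holds for the photon radiation most workers receive:

```python
# Back-of-envelope check of the BMJ study's figures under a linear
# no-threshold model: excess risk scales directly with cumulative dose.

EXCESS_RISK_PER_GY = 0.52  # 52% excess solid-cancer mortality per 1 Gy (study estimate)

def excess_mortality_risk(annual_dose_msv: float, years: float) -> float:
    """Excess solid-cancer mortality risk for a cumulative occupational dose.

    Converts mSv to Gy (treating 1 Gy ~ 1,000 mSv for this illustration)
    and scales linearly by the study's per-Gy estimate.
    """
    cumulative_gy = annual_dose_msv * years / 1000.0
    return EXCESS_RISK_PER_GY * cumulative_gy

# A worker at the 20 mSv/year occupational limit for five years
# accumulates 100 mSv (0.1 Gy), giving the quoted 5.2% figure:
print(f"{excess_mortality_risk(annual_dose_msv=20, years=5):.1%}")  # 5.2%
```

The same linearity is what makes the third bullet notable: halving the dose simply halves the estimated excess risk, so risk never drops to zero at low exposures.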

The Takeaway

Even though the INWORKS study was conducted on nuclear workers rather than medical professionals, the findings could have implications for those who might be exposed to medical radiation, such as interventional radiologists and radiologic technologists. The study will undoubtedly be examined by radiation protection organizations and government regulators; the question is whether it leads to any changes in rules on occupational radiation exposure.

How Vendors Sell AI

Better patient care is the main selling point used by AI vendors when marketing neuroimaging algorithms, followed closely by time savings. Further down the list of benefits are lower costs and increased revenue for providers. 

So says a new analysis in JACR that takes a close look at how FDA-cleared neuroimaging AI algorithms are marketed by vendors. It also includes several warning signs for both AI developers and clinicians.

AI is the most exciting technology to arrive in healthcare in decades, but questions percolate on whether AI developers are overhyping the technology. In the new analysis, researchers focused on marketing claims made for 59 AI neuroimaging algorithms cleared by the FDA from 2008 to 2022. Researchers analyzed FDA summaries and vendor websites, finding:

  • For 69% of algorithms, vendors highlighted an improvement in quality of patient care, while time savings for clinicians were touted for 44%. Only 16% of algorithms were promoted as lowering costs, while just 11% were positioned as increasing revenue
  • 50% of cleared neuroimaging algorithms were related to detection or quantification of stroke; of these, 41% were for intracranial hemorrhage, 31% for stroke brain perfusion, and 24% for detection of large vessel occlusion 
  • 41% of the algorithms were intended for use with non-contrast CT scans, 36% with MRI, 15% with CT perfusion, 14% with CT angiography, and the rest with MR perfusion and PET
  • 90% of the algorithms studied were cleared in the last five years, and 42% since last year

The researchers further noted two caveats in AI marketing: 

  • There is a lack of publicly available data to support vendor claims about the value of their algorithms. Better transparency is needed to create trust and clinician engagement.
  • The single-use-case nature of many AI algorithms raises questions about their economic viability. Many different algorithms would have to be implemented at a facility to ensure “a reasonable breadth of triage” for critical findings, and the financial burden of such integration is unclear.

The Takeaway

The new study offers intriguing insights into how AI algorithms are marketed by vendors, and how these efforts could be perceived by clinicians. The researchers note that financial pressure on AI developers may cause them to make “unintentional exaggerated claims” to recoup the cost of development; it is incumbent upon vendors to scrutinize their marketing activities to avoid overhyping AI technology.

Are Doctors Overpaid?

A new study on physician salaries is raising pointed questions about pay for US physicians and whether it contributes to rising healthcare costs – that is, if you believe the numbers are accurate. 

The study was released in July by the National Bureau of Economic Research (NBER), which produces in-depth reports on a variety of topics. 

The current paper is highly technical and might have languished in obscurity were it not for an August 4 article in The Washington Post that examined the findings with the claim that “doctors make more than anyone thought.”

It is indeed true that the NBER’s estimate of physician salaries seems high. The study claims US physicians made an average of $350k in 2017, the year that the researchers focused on by analyzing federal tax records. 

  • The NBER estimate is far higher than the $294k figure in Medscape’s 2017 report on physician compensation – a 19% difference. 

The variation is even greater for diagnostic radiologists. The NBER data claim radiologists had a median annual salary in 2017 of $546k – 38% higher than the $396k average salary listed in Medscape’s 2017 report. 

  • The NBER numbers from six years ago are even higher than 2022/2023 numbers for radiologist salaries in several recent reports, by Medscape ($483k), Doximity ($504k), and Radiology Business ($482k). 
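The percentage gaps quoted above are straightforward relative differences against the Medscape figures; a quick sketch (the helper name is ours, not NBER's) reproduces both:

```python
def pct_above(a: float, b: float) -> float:
    """How much higher a is than b, as a fraction of b."""
    return (a - b) / b

# NBER vs. Medscape 2017, all physicians ($350k vs. $294k)
print(f"{pct_above(350, 294):.0%}")  # 19%

# NBER vs. Medscape 2017, diagnostic radiologists ($546k vs. $396k)
print(f"{pct_above(546, 396):.0%}")  # 38%
```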

But the NBER researchers claim that by analyzing tax data rather than relying on self-reported earnings, their data are more accurate than previous studies, which they believe underestimate physician salaries by as much as 25%. 

  • They also estimate that physician salaries make up about 9% of total US healthcare costs.

What difference does it make how much physicians earn? The WaPo story sparked a debate with 6.1k comments so far, with many readers accusing doctors of contributing to runaway healthcare costs in the US.

  • Meanwhile, a thread in the AuntMinnie forums argued whether the NBER numbers were accurate, with some posters warning that the figures could lead to additional cuts in Medicare payments for radiologists. 

The Takeaway

Lost in the debate over the NBER report is its finding that physician pay makes up only 9% of US healthcare costs. In a medical system that’s rife with overutilization, administrative costs, and duplicated effort across fragmented healthcare networks, physician salaries should be the last target for those who actually want to cut healthcare spending. 

Grading AI Report Quality

One of the most exciting new use cases for medical AI is in generating radiology reports. But how can you tell whether the quality of a report generated by an AI algorithm is comparable to that of a radiologist?

In a new study in Patterns, researchers propose a technical framework for automatically grading the output of AI-generated radiology reports, with the ultimate goal of producing AI-generated reports that are indistinguishable from those of radiologists. 

Most radiology AI applications so far have focused on developing algorithms to identify individual pathologies on imaging exams. 

  • While this is useful, helping radiologists streamline the production of their main output – the radiology report – could have a far greater impact on their productivity and efficiency. 

But existing tools for measuring the quality of AI-generated narrative reports are limited and don’t match up well with radiologists’ evaluations. 

  • To improve that situation, the researchers applied several existing automated metrics for analyzing report quality and compared them to the scores of radiologists, seeking to better understand AI’s weaknesses. 

Not surprisingly, the automated metrics fell short in several ways, including false prediction of findings, omitting findings, and incorrectly locating and predicting the severity of findings. 

  • These shortcomings point out the need for better scoring systems for gauging AI performance. 

The researchers therefore proposed a new metric for grading AI-generated report quality, called RadGraph F1, and a new methodology, RadCliQ, to predict how well an AI report would measure up to radiologist scrutiny. 

  • RadGraph F1 and RadCliQ could be used in future research on AI-generated radiology reports, and to that end the researchers have made the code for both metrics available as open source.

Ultimately, the researchers envision generalist medical AI models that could perform multiple complex tasks, such as conversing with radiologists and physicians about medical images. 

  • Another use case could be applications that are able to explain imaging findings to patients in everyday language. 

The Takeaway

It’s a complex and detailed paper, but the new study is important because it outlines the metrics that can be used to teach machines how to generate better radiology reports. Given the imperative to improve radiologist productivity in the face of rising imaging volume and workforce shortages, this could be one more step on the quest for the Holy Grail of AI in radiology.

Mammography AI’s Leap Forward

A new study out of Sweden offers a resounding vote of confidence in the use of AI for analyzing screening mammograms. Published in The Lancet Oncology, researchers found that AI cut radiologist workload almost by half without affecting cancer detection or recall rates.

AI has been promoted as the technology that could save radiology from rising imaging volumes, growing burnout, and pressure to perform at a higher level with fewer resources. But many radiology professionals remember similar promises made in the 1990s around computer-aided detection (CAD), which failed to live up to the hype.

Breast screening presents a particular challenge in Europe, where clinical guidelines call for all screening exams to be double-read by two radiologists – leading to better sensitivity but also imposing a higher workload. AI could help by working as a triage tool, enabling radiologists to only double-read those cases most likely to have cancer.

In the MASAI study, researchers are assessing AI for breast screening in 100k women in a population-based screening program in Sweden, with mammograms analyzed by ScreenPoint’s Transpara version 1.7.0 software. In an interim analysis, researchers looked at results for 80k mammography-eligible women ages 40-80. 

The Transpara software applies a 10-point score to mammograms; in MASAI those scored 1-9 are read by a single radiologist, while those scored 10 are read by two breast radiologists. This technique was compared to double-reading, finding that:

  • AI reduced the mammography reading workload by almost 37k screening mammograms, or 44%
  • AI had a higher cancer detection rate per 1k screened participants (6.1 vs. 5.1) although the difference was not statistically significant (P=0.052)
  • Recall rates were comparable (2.2% vs. 2.0%)
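The triage protocol described above can be sketched as a simple rule (the function below is our illustration of the workflow as reported, not ScreenPoint's software):

```python
def assign_readers(transpara_score: int) -> int:
    """MASAI-style triage: Transpara scores exams 1-10, and only the
    highest-risk exams (score 10) go to double-reading by two breast
    radiologists. Returns the number of radiologist reads the exam gets.
    """
    if not 1 <= transpara_score <= 10:
        raise ValueError("Transpara score must be between 1 and 10")
    return 2 if transpara_score == 10 else 1

print(assign_readers(10))  # 2 -> double-read by two breast radiologists
print(assign_readers(4))   # 1 -> single read
```

The workload math follows directly: if roughly a tenth of exams score 10, most double reads collapse to single reads, which is how the study arrives at a reduction of nearly 37k reads (44%) versus double-reading everything.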

The results demonstrate the safety of using AI as a triage tool, and the MASAI researchers plan to continue the study until it reaches 100k participants so they can measure the impact of AI on detection of interval cancers – cancers that appear between screening rounds.

The Takeaway

It’s hard to overestimate the MASAI study’s significance. The findings strongly support what AI proponents have been saying all along – that AI can save radiologists time while maintaining diagnostic performance. The question is the extent to which the MASAI results will apply outside of the double-reading environment, or to other clinical use cases.

Breast Ultrasound Gets Wearable

Wearable devices are all the rage in personal fitness – could wearable breast ultrasound be next? MIT researchers have developed a patch-sized wearable breast ultrasound device that’s small enough to be incorporated into a bra for early cancer detection. They described their work in a new paper in Science Advances.

This isn’t the first use of wearable ultrasound. In fact, earlier this year UCSD researchers revealed their work on a wearable cardiac ultrasound device that obtains real-time data on cardiac function. 

The MIT team’s concept expands the idea into cancer detection. They took advantage of previous work on conformable piezoelectric ultrasound transducer materials to develop cUSBr-Patch, a one-dimensional phased-array probe integrated into a honeycomb-shaped patch that can be inserted into a soft fabric bra. 

The array covers the entire breast surface and can acquire images from multiple angles and views using 64 elements at a 7MHz frequency. The honeycomb design means that the array can be rotated and moved into different imaging positions, and the bra can even be reversed to acquire images from the other breast. 

The researchers tested cUSBr-Patch on phantoms and a human subject, and compared it to a conventional ultrasound scanner. They found that cUSBr-Patch:

  • Had a field of view up to 100mm wide and an imaging depth up to 80mm
  • Achieved resolution comparable to conventional ultrasound
  • Detected cysts as small as 30mm in the human volunteer, a 71-year-old woman with a history of breast cysts
  • Detected the same cysts with the array in different positions, an important capability for long-term monitoring

The MIT researchers believe that wearable breast ultrasound could detect early-stage breast cancer, for instance in high-risk individuals between routine screening mammograms. 

The researchers ultimately hope to develop a version of the device that’s about the size of a smartphone (right now the array has to be hooked up to a conventional ultrasound scanner to view images). They also want to investigate the use of AI to analyze images.

The Takeaway

It’s still early days for wearable breast ultrasound, but the new results are an exciting development that hints of future advances to come. Wearable breast ultrasound could even have an advantage over other wearable use cases like cardiac monitoring, as it doesn’t require continuous imaging during the user’s activities. Stay tuned.

Does ‘Automation Neglect’ Limit AI’s Impact?

Radiologists ignored AI suggestions in a new study because of “automation neglect,” a phenomenon in which humans are less likely to trust algorithmic recommendations. The findings raise questions about whether AI really should be used as a collaborative tool by radiologists. 

How radiologists use AI predictions has become a growing area of research as AI moves into the clinical realm. Most use cases see radiologists employing AI in a collaborative role as a decision-making aid when reviewing cases. 

But is that really the best way to use AI? In a paper published by the National Bureau of Economic Research, researchers from Harvard Medical School and MIT explored the effectiveness of radiologist performance when assisted by AI, in particular its impact on diagnostic quality.

They ran an experiment in which they manipulated radiologist access to predictions from the CheXpert AI algorithm for 324 chest X-ray cases, and then analyzed the results. They also assessed radiologist performance with and without clinical context. The 180 radiologists participating in the study were recruited from US teleradiology firms, as well as from a health network in Vietnam. 

It was expected that AI would boost radiologist performance, but instead accuracy remained unchanged:

  • AI predictions were more accurate than two-thirds of the radiologists
  • Yet, AI assistance failed to improve the radiologists’ diagnostic accuracy, as readers underweighted AI findings by 30% compared to their own assessments
  • Radiologists took 4% longer to interpret cases when either AI or clinical context was added
  • Adding clinical context to cases had a bigger impact on radiologist performance than adding AI interpretations

The findings show automation neglect can be a “major barrier” to human-AI collaboration. Interestingly, the new article seems to run counter to a previous study finding that radiologists who received incorrect AI results were more likely to follow the algorithm’s suggestions – against their own judgment. 

The Takeaway

The authors themselves admit the new findings are “puzzling,” but they do have intriguing ramifications. In particular, the researchers suggest that there may be limitations to the collaborative model in which humans and AI work together to analyze cases. Instead, it may be more effective to assign AI exclusively to certain studies, while radiologists work without AI assistance on other cases.

AI Automates Liver Fat Detection

An automated AI algorithm that analyzes CT scans for signs of hepatic steatosis could make it possible to perform opportunistic screening for liver disease. In a study in AJR, researchers described their tool and the optimal CT parameters it needs for highest accuracy. 

Hepatic steatosis (fatty liver) is a common condition that can represent non-alcoholic fatty liver disease (NAFLD), also known as metabolic dysfunction-associated steatotic liver disease (MASLD). Imaging is the only noninvasive tool for detecting steatosis and quantifying liver fat, with CT having an advantage due to its widespread availability. 

Furthermore, abdominal CT data acquired for other clinical indications could be analyzed for signs of fatty liver – the classic definition of opportunistic screening. Patients could then be moved into treatment or intervention.

But who would read all those CT scans? Not who, but what – an AI algorithm trained to identify hepatic steatosis. To that end, researchers from the US, UK, and Israel tested an algorithm from Nanox AI that was trained to detect moderate hepatic steatosis on either non-contrast or post-contrast CT images. (Nanox AI was formed when Israeli X-ray vendor Nanox bought AI developer Zebra Medical Vision in 2021.)

The group’s study population included 2,777 patients with portal venous phase CT images acquired for different indications. AI was used to analyze the scans, and researchers noted the algorithm’s performance for detecting moderate steatosis under a variety of circumstances, such as liver attenuation in Hounsfield units (HU). 

  • The AI algorithm’s performance was higher for post-contrast liver attenuation than post-contrast liver-spleen attenuation difference (AUC=0.938 vs. 0.832)
  • Post-contrast liver attenuation at <80 HU had sensitivity for moderate steatosis of 77.8% and specificity of 93.2%
  • High specificity could be key to opportunistic screening as it enables clinicians to rule out individuals who don’t have disease without requiring diagnostic work-up that might lead to false positives
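The <80 HU cutoff reported above amounts to a simple threshold rule; a hedged sketch of that rule (our illustration, not Nanox AI's actual algorithm) looks like this:

```python
def flag_moderate_steatosis(liver_attenuation_hu: float,
                            threshold_hu: float = 80.0) -> bool:
    """Rule-of-thumb screen from the AJR study: post-contrast liver
    attenuation below ~80 HU suggests moderate hepatic steatosis
    (reported sensitivity 77.8%, specificity 93.2% at this cutoff).

    Fat lowers CT attenuation, so fattier livers read *lower* in
    Hounsfield units. Illustrative threshold only, not the vendor's model.
    """
    return liver_attenuation_hu < threshold_hu

print(flag_moderate_steatosis(72.5))  # True  -> flag for possible work-up
print(flag_moderate_steatosis(95.0))  # False -> moderate steatosis unlikely
```

The high specificity at this cutoff is what matters for opportunistic screening: a negative result confidently rules patients out, keeping false-positive work-ups to a minimum.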

The authors point out that opportunistic screening would make abdominal CT scans more cost-effective by using them to identify additional pathology at minimal additional cost to the healthcare system. 

The Takeaway

This study represents another step forward in showing how AI can make opportunistic screening a reality. AI algorithms can comb through CT scans acquired for a variety of reasons, identifying at-risk individuals and alerting radiologists that additional work-up is needed. The only question is what’s needed to put opportunistic screening into clinical practice. 

Get every issue of The Imaging Wire, delivered right to your inbox.
