Can you believe the hype when it comes to marketing claims made for AI software? Not always. A new review in JAMA Network Open suggests that marketing materials for one-fifth of FDA-cleared AI applications don’t agree with the language in their regulatory submissions.
Interest in AI for healthcare has exploded, creating regulatory challenges for the FDA due to the technology’s novelty. This has left many AI developers guessing how they should comply with FDA rules, both before and after products get regulatory clearance.
This creates the possibility of discrepancies between what the FDA has cleared and how AI firms promote their products. To investigate, researchers from NYU Langone Health analyzed content from 510(k) clearance summaries and accompanying marketing materials for 119 AI- and machine learning (ML)-enabled devices cleared from November 2021 to March 2022. Their findings included:
- Overall, AI/ML marketing language was consistent with 510(k) summaries for 80.67% of devices
- Language was considered “discrepant” for 12.61% and “contentious” for 6.72%
- Most of the AI/ML devices surveyed (63.03%) were developed for radiology use; these had a slightly higher rate of consistency (82.67%) than the entire study sample
The authors provided several examples illustrating when AI/ML firms went astray. In one case labeled as “discrepant,” a developer touted the “cutting-edge AI and advanced robotics” in its software for measuring and displaying cerebral blood flow with ultrasound. But the product’s 510(k) summary never discussed AI capabilities, and the algorithm isn’t included on the FDA’s list of AI/ML-enabled devices.
In another case labeled as “contentious,” marketing materials for an ECG mapping software application mention that it includes computational modeling and is a smart device, but users must request a pamphlet from the developer for more information.
So, can you believe the AI hype? This study suggests that most of the time you can: a consistency rate of 80.67% is not bad for a field as new as AI (a point acknowledged in an invited commentary on the paper). But the study’s authors caution that “any level of discrepancy is important to note for consumer safety.” And for a technology that already has trust issues, it’s probably best that developers not push the envelope when it comes to marketing.