Wearable devices are all the rage in personal fitness – could wearable breast ultrasound be next? MIT researchers have developed a patch-sized wearable breast ultrasound device that’s small enough to be incorporated into a bra for early cancer detection. They described their work in a new paper in Science Advances.
This isn’t the first use of wearable ultrasound. In fact, earlier this year UCSD researchers revealed their work on a wearable cardiac ultrasound device that obtains real-time data on cardiac function.
The MIT team’s concept expands the idea into cancer detection. They took advantage of previous work on conformable piezoelectric ultrasound transducer materials to develop cUSBr-Patch, a one-dimensional phased-array probe integrated into a honeycomb-shaped patch that can be inserted into a soft fabric bra.
The array covers the entire breast surface and can acquire images from multiple angles and views using 64 elements at a 7MHz frequency. The honeycomb design means that the array can be rotated and moved into different imaging positions, and the bra can even be reversed to acquire images from the other breast.
The researchers tested cUSBr-Patch on phantoms and a human subject, and compared it to a conventional ultrasound scanner. They found that cUSBr-Patch:
- Had a field of view up to 100mm wide and an imaging depth up to 80mm
- Achieved resolution comparable to conventional ultrasound
- Detected cysts as small as 0.3 cm in the human volunteer, a 71-year-old woman with a history of breast cysts
- Detected the same cysts with the array in different positions – an important capability for long-term monitoring
The MIT researchers believe that wearable breast ultrasound could detect early-stage breast cancer, particularly in high-risk patients between routine screening mammograms.
The researchers ultimately hope to develop a version of the device that’s about the size of a smartphone (right now the array has to be hooked up to a conventional ultrasound scanner to view images). They also want to investigate the use of AI to analyze images.
It’s still early days for wearable breast ultrasound, but the new results are an exciting development that hints at advances to come. Wearable breast ultrasound could even have an advantage over other wearable use cases like cardiac monitoring, as it doesn’t require continuous imaging during the user’s activities. Stay tuned.
Radiologists ignored AI suggestions in a new study because of “automation neglect,” a phenomenon in which humans are less likely to trust algorithmic recommendations. The findings raise questions about whether AI really should be used as a collaborative tool by radiologists.
How radiologists use AI predictions has become a growing area of research as AI moves into the clinical realm. Most use cases see radiologists employing AI in a collaborative role as a decision-making aid when reviewing cases.
But is that really the best way to use AI? In a paper published by the National Bureau of Economic Research, researchers from Harvard Medical School and MIT explored the effectiveness of radiologist performance when assisted by AI, in particular its impact on diagnostic quality.
They ran an experiment in which they manipulated radiologist access to predictions from the CheXpert AI algorithm for 324 chest X-ray cases, and then analyzed the results. They also assessed radiologist performance with and without clinical context. The 180 radiologists participating in the study were recruited from US teleradiology firms, as well as from a health network in Vietnam.
It was expected that AI would boost radiologist performance, but instead accuracy remained unchanged:
- AI predictions were more accurate than two-thirds of the radiologists
- Yet, AI assistance failed to improve the radiologists’ diagnostic accuracy, as readers underweighted AI findings by 30% compared to their own assessments
- Radiologists took 4% longer to interpret cases when either AI or clinical context was added
- Adding clinical context to cases had a bigger impact on radiologist performance than adding AI interpretations
The findings show automation neglect can be a “major barrier” to human-AI collaboration. Interestingly, the new article seems to run counter to a previous study finding that radiologists who received incorrect AI results were more likely to follow the algorithm’s suggestions – against their own judgment.
The authors themselves admit the new findings are “puzzling,” but they do have intriguing ramifications. In particular, the researchers suggest that there may be limitations to the collaborative model in which humans and AI work together to analyze cases. Instead, it may be more effective to assign AI exclusively to certain studies, while radiologists work without AI assistance on other cases.
An automated AI algorithm that analyzes CT scans for signs of hepatic steatosis could make it possible to perform opportunistic screening for liver disease. In a study in AJR, researchers described their tool and the optimal CT parameters it needs for highest accuracy.
Hepatic steatosis (fatty liver) is a common condition that can represent non-alcoholic fatty liver disease (NAFLD), also known as metabolic dysfunction-associated steatotic liver disease (MASLD). Imaging is the only noninvasive tool for detecting steatosis and quantifying liver fat, with CT having an advantage due to its widespread availability.
Furthermore, abdominal CT data acquired for other clinical indications could be analyzed for signs of fatty liver – the classic definition of opportunistic screening. Patients could then be moved into treatment or intervention.
But who would read all those CT scans? Not who, but what – an AI algorithm trained to identify hepatic steatosis. To that end, researchers from the US, UK, and Israel tested an algorithm from Nanox AI that was trained to detect moderate hepatic steatosis on either non-contrast or post-contrast CT images. (Nanox AI was formed when Israeli X-ray vendor Nanox bought AI developer Zebra Medical Vision in 2021.)
The group’s study population included 2,777 patients with portal venous phase CT images acquired for different indications. AI was used to analyze the scans, and researchers noted the algorithm’s performance for detecting moderate steatosis under a variety of circumstances, such as liver attenuation in Hounsfield units (HU).
- The AI algorithm’s performance was higher for post-contrast liver attenuation than post-contrast liver-spleen attenuation difference (AUC=0.938 vs. 0.832)
- Post-contrast liver attenuation at <80 HU had sensitivity for moderate steatosis of 77.8% and specificity of 93.2%
- High specificity could be key to opportunistic screening, as it minimizes false positives that would otherwise send people without disease on to unnecessary diagnostic work-up
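To see why the specificity figure matters, here’s a minimal Python sketch of how the reported operating point (sensitivity 77.8%, specificity 93.2%) translates into screening yield. The population size and 5% prevalence are hypothetical illustrations, not figures from the AJR study:

```python
# Hypothetical illustration (population size and prevalence are assumptions,
# not from the AJR study): how the reported sensitivity/specificity at the
# <80 HU threshold play out when screening opportunistically.

def screening_yield(n, prevalence, sensitivity, specificity):
    """Return (true_positives, false_positives, PPV) for a screened population."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = diseased * sensitivity          # cases correctly flagged
    fp = healthy * (1 - specificity)     # healthy people incorrectly flagged
    return tp, fp, tp / (tp + fp)

# Reported operating point: sensitivity 77.8%, specificity 93.2%
tp, fp, ppv = screening_yield(n=10_000, prevalence=0.05,
                              sensitivity=0.778, specificity=0.932)
print(f"true positives: {tp:.0f}, false positives: {fp:.0f}, PPV: {ppv:.1%}")
```

Even at 93.2% specificity, false positives outnumber true positives at low prevalence – which is why pushing specificity higher matters so much for an opportunistic screening workflow.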
The authors point out that opportunistic screening would make abdominal CT scans more cost-effective by using them to identify additional pathology at minimal additional cost to the healthcare system.
This study represents another step forward in showing how AI can make opportunistic screening a reality. AI algorithms can comb through CT scans acquired for a variety of reasons, identifying at-risk individuals and alerting radiologists that additional work-up is needed. The only question is what’s needed to put opportunistic screening into clinical practice.
In a major victory for PET advocates, CMS this week said it was opening a review of its reimbursement policy on PET scans for Alzheimer’s disease. The review could lead to more generous Medicare and Medicaid payments for PET to detect amyloid buildup in the brain, long known as a link to the debilitating – and inevitably fatal – disease.
Medicare’s current policy on PET for Alzheimer’s has been in place since 2013 and is based on its coverage with evidence (CED) framework; it restricts reimbursement to a single scan per lifetime for patients who must be participating in clinical trials. The CED policy reflects not only CMS’ cautious approach to new technology, but also the fact that for years there have been no effective treatments for Alzheimer’s disease.
That’s all changed within the last year. A new class of drugs that target amyloid buildup in the brain has begun to receive FDA approval, the most recent being Leqembi from Eisai/Biogen in January 2023. And this week, Eli Lilly reported positive results for its amyloid-targeting treatment donanemab (see below), with approval expected by the end of 2023.
The new drugs have changed the game when it comes to diagnosis and treatment of Alzheimer’s disease:
- PET can now be used to identify eligible patients and monitor their treatment
- Thanks to PET, patients won’t continue to be given expensive drugs after amyloid buildup has been eliminated
- Expanded PET reimbursement could boost the use of PET diagnostic tracers for identifying amyloid buildup
CMS is taking comments on its proposal through August 16. If the agency eliminates the CED policy without issuing a new national coverage determination, decisions on PET reimbursement will be made by local Medicare Administrative Contractors (MACs).
This week’s news could be a Pyrrhic victory if PET reimbursement levels are set too low. One positive sign is that CMS has said it also plans to review its policy that bundles radiotracer payments together with scan payments, which tends to depress reimbursement.
The nuclear medicine and molecular imaging community has chafed for years under CMS’ restrictive policies on PET for Alzheimer’s disease, with groups like SNMMI lobbying for the change. This week’s news should have wide-ranging benefits not only for the PET business sector, but also for patients who are facing the scourge of Alzheimer’s disease.
In the early days of the COVID-19 pandemic in China, hospitals were performing so many lung scans of infected patients that CT scanners were crashing. That’s according to an article based on an interview with a Wuhan radiologist that provides a chilling first-hand account of radiology’s role in what’s become the biggest public health crisis of the 21st century.
The interview was originally published in 2022 by the Chinese-language investigative website Caixin and was translated and published this month by U.S. Right to Know, a public health advocacy organization.
In a sign of the information’s sensitivity, the original publication on Caixin’s website has been deleted, but U.S. Right to Know obtained the document from the US State Department under the Freedom of Information Act.
Radiologists at a Wuhan hospital noticed how COVID cases began doubling every 3-4 days in early January 2020, the article states, with many patients showing signs of ground-glass opacities on CT lung scans – a telltale sign of COVID infection. But Chinese authorities suppressed news about the rapid spread of the virus, and by January 11 the official estimate was that there were only 41 COVID cases in the entire country.
In reality, COVID cases were growing rapidly. CT machines began crashing in the fourth week of January due to overheating, said the radiologist, who estimated the number of cases in Wuhan at 10,000 by January 21. Hospitals were forced to turn infected patients away, and many people were so sick they were unable to climb onto X-ray tables for exams. Other details included:
- Chinese regulatory authorities denied that human-to-human transmission of the SARS-CoV-2 virus was occurring even as healthcare workers began falling ill
- Many workers at Chinese hospitals were discouraged from wearing masks in the pandemic’s early days to maintain the charade that human-to-human contact was not possible – and many ended up contracting the virus
- Radiologists and other physicians lived in fear of retaliation if they spoke up about the virus’ rapid spread
The article provides a stunning behind-the-scenes look at the early days of a pandemic that would go on to reshape the world in 2020. What’s more, it demonstrates the vital role of radiology as a front-line service that’s key to the early identification and treatment of disease – even in the face of bureaucratic barriers to delivering quality care.
Over one-quarter of patients presenting with a first episode of psychosis had some kind of abnormality on brain MRI scans, and about 6% of all findings were clinically relevant and required a change in patient management. Writing in JAMA Psychiatry, researchers from the UK and Germany say their study suggests that MRI should be used in the clinical workup of all patients presenting with psychosis.
Psychosis caused by another medical condition – called secondary psychosis – can have causes that produce brain abnormalities visible on MRI scans. These are findings like white-matter hyperintensities that – while not themselves a form of pathology – are sometimes associated with more serious conditions like cognitive decline.
MRI scans of people experiencing their first psychotic episode could detect some of these abnormalities before subsequent episodes occur. But at present there is no consensus as to whether MRI should be used in the evaluation of patients presenting with first-episode psychosis.
In a meta-analysis, researchers wanted to investigate the prevalence of intracranial radiological abnormalities on MRI scans of patients with first-episode psychosis. They reviewed 12 independent studies that covered a total of 1,613 patients. Findings across all the studies included:
- A prevalence rate of 26.4% for all radiological abnormalities
- A prevalence rate of 5.9% for clinically relevant abnormalities
- One in 18 patients had a change in management after an MRI scan
- White-matter hyperintensities were the most common abnormality, with an overall prevalence of 7.9%, falling to 0.9% when restricted to clinically relevant findings
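As a quick sanity check, the reported rates hang together: “one in 18” is about 5.6%, close to the 5.9% prevalence of clinically relevant abnormalities. A short Python sketch using only the figures above (the patient counts it derives are rounded estimates, not numbers reported in the paper):

```python
# Back-of-the-envelope check of the meta-analysis figures
# (1,613 patients across 12 studies, rates as reported above).

n_patients = 1613

any_abnormality = 0.264 * n_patients      # ~426 patients with any finding
clinically_relevant = 0.059 * n_patients  # ~95 patients with relevant findings
management_change = n_patients / 18       # "one in 18" -> ~90 patients

print(round(any_abnormality), round(clinically_relevant), round(management_change))
print(f"one in 18 = {1/18:.1%}")  # ~5.6%, close to the 5.9% relevant-finding rate
```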
Given the impact of MRI on patient management, the authors suggested that performing routine scans on people after their first psychotic episode could have both clinical and economic benefits. This could be especially true due to the financial costs of failing to identify a clinically relevant abnormality that could lead to a later episode if not treated.
These findings may break the logjam over whether MRI should be routinely used in the evaluation of patients with first-episode psychosis. The authors note that while many of the abnormalities found on MRI in the studies they reviewed did not require a change in patient management, abnormalities could be harbingers of poorer patient outcomes, even if they don’t eventually lead to a diagnosis of secondary psychosis.
If you think you’ve been seeing more non-physician practitioners (NPPs) reading medical imaging exams, you’re not alone. A new study in Current Problems in Diagnostic Radiology found that the rate of NPP interpretations went up almost 27% over four years.
US radiologists have zealously guarded their position as the primary readers of imaging exams, even as allied health professionals like nurses and physician assistants clamor to extend their scope of practice (SOP) into image interpretation. The struggle often plays out in state legislatures, with each side pushing laws benefiting their positions.
How has this dynamic affected NPP interpretation rates? In the current study, researchers looked at NPP interpretations of 110 million imaging claims from 2016 to 2020. They also examined how NPP rates changed by geographic location, and whether state laws on NPP practice authority affected rates. Findings included:
- The rate of NPP interpretation for imaging studies went from 2.6% to 3.3% in the study period – growth of 26.9%.
- Metropolitan areas saw the highest growth rate in NPP interpretation, with growth of 31.3%, compared to micropolitan areas (18.8%), while rates in rural areas did not grow at a statistically significant rate.
- Rates of NPP interpretation tended to grow more in states with less restrictive versus more restrictive practice-authority laws (45% vs. 16.6%).
- NPP interpretation was focused on radiography/fluoroscopy (53%), ultrasound (24%), and CT and MRI (21%).
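One point worth making explicit: the 26.9% figure is relative growth in the interpretation rate, not a change in percentage points. A quick sketch of the arithmetic using the rates above:

```python
# Relative vs. absolute growth in the NPP interpretation rate, 2016-2020.

start, end = 2.6, 3.3  # % of imaging claims interpreted by NPPs

absolute_change = end - start                  # +0.7 percentage points
relative_growth = (end - start) / start * 100  # ~26.9% relative growth

print(f"{absolute_change:.1f} points, {relative_growth:.1f}% relative growth")
```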
The findings are particularly interesting because they run counter to one of the main arguments made by NPPs for expanding their scope of practice into imaging: to alleviate workforce shortages in rural areas. Instead, NPPs (like physicians themselves) tend to gravitate to urban areas – where their services may not be as needed.
The study also raises questions about whether the training that NPPs receive is adequate for a highly subspecialized area like medical imaging, particularly given the study’s findings that advanced imaging like CT and MRI make up one in five exams being read by NPPs.
The question now is whether the study will affect the ongoing turf battle between radiologists and NPPs over image interpretation playing out in state legislatures.
Can you believe the hype when it comes to marketing claims made for AI software? Not always. A new review in JAMA Network Open suggests that marketing materials for one-fifth of FDA-cleared AI applications don’t agree with the language in their regulatory submissions.
Interest in AI for healthcare has exploded, creating regulatory challenges for the FDA due to the technology’s novelty. This has left many AI developers guessing how they should comply with FDA rules, both before and after products get regulatory clearance.
This creates the possibility for discrepancies between products the FDA has cleared and how AI firms promote them. To investigate further, researchers from NYU Langone Health analyzed content from 510(k) clearance summaries and accompanying marketing materials for 119 AI- and machine learning (ML)-enabled devices cleared from November 2021 to March 2022. Their findings included:
- Overall, AI/ML marketing language was consistent with 510(k) summaries for 80.67% of devices
- Language was considered “discrepant” for 12.61% and “contentious” for 6.72%
- Most of the AI/ML devices surveyed (63.03%) were developed for radiology use; these had a slightly higher rate of consistency (82.67%) than the entire study sample
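Given the unusually precise percentages, the underlying device counts out of 119 can be back-calculated – an inference from the figures above, not something stated in the paper:

```python
# Back-converting the reported percentages to device counts out of 119.
# This is an inference from the precision of the figures, not from the paper.

n_devices = 119
rates = {"consistent": 0.8067, "discrepant": 0.1261, "contentious": 0.0672}

counts = {label: round(rate * n_devices) for label, rate in rates.items()}
print(counts)  # counts sum to the full sample of 119 devices
assert sum(counts.values()) == n_devices
```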
The authors provided several examples illustrating when AI/ML firms went astray. In one case labeled as “discrepant,” a developer touted the “cutting-edge AI and advanced robotics” in its software for measuring and displaying cerebral blood flow with ultrasound. But the product’s 510(k) summary never discussed AI capabilities, and the algorithm isn’t included on the FDA’s list of AI/ML-enabled devices.
In another case labeled as “contentious,” marketing materials for an ECG mapping software application mention that it includes computational modeling and is a smart device, but direct users to request a pamphlet from the developer for more information.
So, can you believe the AI hype? This study shows that most of the time you can, with a consistency rate of 80.67% – not bad for a field as new as AI (a fact acknowledged in an invited commentary on the paper). But the study’s authors suggest that “any level of discrepancy is important to note for consumer safety.” And for a technology that already has trust issues, it’s probably best that developers not push the envelope when it comes to marketing.