How are healthcare providers who have adopted AI really using it? A new Medscape/HIMSS survey found that most providers are using AI for administrative tasks, while medical image analysis is also one of the top AI use cases.
AI has the potential to revolutionize healthcare, but many industry observers have been frustrated with the slow pace of clinical adoption.
Implementation challenges, regulatory issues, and lack of reimbursement are among the reasons keeping more healthcare providers from embracing the technology.
But the Medscape/HIMSS survey shows some early successes for AI … as well as lingering questions.
Researchers surveyed a total of 846 people in the U.S. who were either executive or clinical leaders, practicing physicians or nurses, or IT professionals, and whose practices were already using AI in some way.
The top four tasks for which AI is being used were administrative rather than clinical, with image analysis occupying the fifth spot …
Transcribing patient notes (36%).
Transcribing business meetings (32%).
Creating routine patient communications (29%).
Performing patient record-keeping (27%).
Analyzing medical images (26%).
The survey also analyzed attitudes toward AI, finding …
57% said AI helped them be more efficient and productive.
But lower marks were given for reducing staff hours (10%) and lowering costs (31%).
AI got the highest marks for helping with transcription of business meetings (77%) and patient notes (73%), reviewing medical literature (72%), and medical image analysis (70%).
The findings track well with developments at last week's RSNA 2024, where AI algorithms dedicated to non-clinical tasks like radiology report generation, scheduling, and operations analysis grew in prominence.
Indeed, many AI developers have specifically targeted the non-clinical space, both because commercialization is easier (FDA authorization is not typically needed) and because doctors often say they need more help with administrative rather than clinical tasks.
The Takeaway
While it’s easy to be impatient with AI’s slow uptake, the Medscape/HIMSS survey shows that AI adoption is indeed occurring at medical practices. And while image analysis was radiology’s first AI use case, speeding up workflow and administrative tasks may end up being the technology’s most impactful application.
CHICAGO – It’s been AI all the time this week at RSNA 2024. From clinical sessions packed with the latest findings on AI’s utility to technical exhibits crowded with AI vendors, artificial intelligence and its impact on radiology was easily the hottest trend at McCormick Place.
Radiology greeted AI with initial skepticism when the first applications like IBM Watson were introduced at RSNA around a decade ago.
But the field’s attitude has been evolving to the point where AI is now being viewed as perhaps the only technology that can save the discipline from the vicious cycle of rising exam volume, falling reimbursement, and pervasive levels of burnout.
RSNA telegraphed the shift last year by announcing that Stanford University’s Curtis Langlotz, MD, PhD, would be RSNA 2024 president.
Langlotz is one of the most respected AI researchers and educators in radiology; he coined the oft-quoted maxim that AI won't replace radiologists, but radiologists who use AI will replace those who don't.
In his president’s address, Langlotz echoed this theme, painting a picture of a future radiology in which humans and machines collaborate to deliver better patient care than either could alone.
Langlotz’s talk was followed by a presentation by another prominent AI luminary – Nina Kottler, MD, of Radiology Partners.
Kottler took on the concerns that many in radiology (and in the world at large) have about AI as a disruptive force in a field that cherishes its traditions.
She advised radiology to take a leading role in AI adoption, repeating a famous quote that the best way to predict the future is to create it yourself.
What were the other trends besides AI at RSNA 2024? They included…
Photon-counting CT, which is likely to see new market entrants in 2025.
Total-body PET, with PET scanners that have extra-long detector arrays.
Theranostics, a discipline that integrates diagnosis and therapy and promises to breathe new life into SPECT.
CT colonography and CCTA, which will see positive reimbursement changes in 2025.
Continued growth of CT lung screening, especially as a tool for opportunistic screening of other conditions.
Continued expansion of AI for breast screening.
The Takeaway
The RSNA meeting has been called radiology’s Super Bowl and World Cup all rolled into one, and this year didn’t disappoint. RSNA 2024 showed that radiology is prepared to fully embrace AI – and a future in which humans and machines collaborate to deliver better patient care.
Once an AI algorithm has been approved and moves into clinical use, how should its performance be monitored? This question was top of mind at last week’s meeting of the FDA’s new Digital Health Advisory Committee.
AI has the potential to radically reshape healthcare and help clinicians manage more patients with fewer staff and other resources.
But AI also represents a regulatory challenge because it’s constantly learning, such that after a few years an AI algorithm might be operating much differently from the version first approved by the FDA – especially with generative AI.
This conundrum was a point of discussion at last week’s DHAC meeting, which was called specifically to focus on regulation of generative AI, and could result in new rules covering all AI algorithms. (An executive summary that outlines the FDA’s thinking is available for download.)
Radiology was well-represented at DHAC, understandably, given that the specialty accounts for the lion's share of authorized algorithms (73% of 950 devices at last count).
A half-dozen radiology AI experts gave presentations over two days, including Parminder Bhatia of GE HealthCare; Nina Kottler, MD, of Radiology Partners; Pranav Rajpurkar, PhD, of Harvard; and Keith Dreyer, DO, PhD, and Bernardo Bizzo, MD, PhD, both of Mass General Brigham and the ACR’s Data Science Institute.
Dreyer and Bizzo directly addressed the question of post-market AI surveillance, discussing ongoing efforts to track AI performance, including …
The Healthcare AI Challenge, a community for healthcare AI validation and monitoring that’s a collaboration between ACR, MGB, and several other academic institutions.
The Takeaway
Last week’s DHAC meeting offers a fascinating glimpse at the issues the FDA is wrestling with as it contemplates stronger regulation of generative AI. Fortunately, radiology has blazed a trail in setting up structures like ARCH-AI and Assess-AI to monitor AI performance, and the FDA is likely to follow the specialty’s lead as it develops a regulatory framework.
Each year approximately 2 billion chest X-rays are performed globally. They are a fast, noninvasive, and relatively inexpensive radiological examination for front-line diagnostics in outpatient, emergency, and community settings.
But beyond the simplicity of CXR lies a secret weapon in the fight against lung cancer: artificial intelligence.
Be it serendipitous screening, opportunistic detection, or incidental identification, AI incorporated into CXR workflows has the potential to screen patients for disease while they are undergoing an unrelated medical examination.
This could include the patient in the ER undergoing a CXR for suspected broken ribs after a fall, or an individual referred by their doctor for a CXR with suspected pneumonia. These people, without symptoms, may unknowingly have small yet growing pulmonary nodules.
AI can find these abnormalities and flag them to clinicians as a suspicious finding for further investigation.
This has the potential to find nodules earlier, in the very early stages of lung cancer when it is easier to biopsy or treat.
Meanwhile, only 5.8% of eligible ex-smoking Americans undergo CT-based lung cancer screening.
So the ability to cast the detection net wider through incidental pulmonary nodule detection has significant merits.
Early global studies into the power of AI for incidental pulmonary nodules (IPNs) show exciting promise.
The latest evidence points to one lung cancer detected for every 1,120 CXRs – a finding with major implications for diagnosing and treating people earlier, and potentially saving lives.
The qXR-LN chest X-ray AI algorithm from Qure.ai is raising the bar for incidental pulmonary nodule detection. In a retrospective study performed on missed or mislabeled US CXR data, qXR-LN achieved an impressive negative predictive value of 96% and an AUC of 0.99 for detection of pulmonary nodules.
By acting as a second pair of eyes for radiologists, qXR-LN can help detect subtle anatomical anomalies that may otherwise go unnoticed, particularly in asymptomatic patients.
The FDA-cleared solution serves as a crucial second reader, assisting in the review of chest radiographs on the frontal projection.
In another multicenter study involving 40 sites across the U.S., the qXR-LN algorithm demonstrated an impressive AUC of 0.94 for scan-level nodule detection, highlighting its potential to significantly impact patient outcomes by identifying early signs of lung cancer that can easily be missed.
The Takeaway
By harnessing the power of AI for opportunistic lung cancer surveillance, healthcare providers can adopt a proactive approach to early detection without significant new investment, ultimately improving patient survival rates.
Qure.ai will be exhibiting at RSNA 2024, December 1-4. Visit booth #4941 for discussion, debate, and demonstrations.
The FDA has updated its list of AI- and machine learning-enabled medical devices that have received regulatory authorization. The list is a closely watched barometer of the health of the AI sector, and the update shows the FDA is keeping a brisk pace of authorizations.
The FDA has maintained double-digit growth of AI authorizations for the last several years, a pace that reflects the growing number of submissions it’s getting from AI developers.
Indeed, data compiled by regulatory expert Bradley Merrill Thompson show how the number of FDA authorizations has been growing rapidly since the dawn of the medical AI era in around 2016 (see also our article on AI safety below).
The new FDA numbers show that …
The FDA has now authorized 950 AI/ML-enabled devices since it began keeping track.
Device authorizations are up 15% for the first half of 2024 compared to the same period the year before (107 vs. 93).
The pace could grow even faster in late 2024 – in 2023, the FDA authorized 126 devices in the second half, up 35% over the first half.
At that pace, the FDA should hit just over 250 total authorizations in 2024 (a quick back-of-envelope check follows this list).
This would represent 14% growth over 220 authorizations in 2023, and compares to growth of 14% in 2022 and 15% in 2021.
As with past updates, radiology makes up the lion's share of AI/ML authorizations, but had a 73% share in the first half, down from 80% for all of 2023.
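The projection above is easy to sanity-check. Here's a quick back-of-envelope sketch using only the figures reported in the list; the sole assumption is that H2 2024 repeats 2023's ~35% second-half bump.

```python
# Back-of-envelope projection of 2024 FDA AI/ML authorizations,
# using only the figures reported above. Sole assumption: H2 2024
# repeats 2023's ~35% second-half bump over the first half.
h1_2023, h2_2023 = 93, 126
h1_2024 = 107

h2_bump = h2_2023 / h1_2023              # ~1.35 (the 35% H2 bump)
h2_2024_est = h1_2024 * h2_bump          # ~145 devices
total_2024_est = h1_2024 + h2_2024_est   # ~252 -> "just over 250"

total_2023 = h1_2023 + h2_2023           # 219, reported as ~220
yoy = total_2024_est / total_2023 - 1    # ~0.15; ~14% if 2023 is rounded to 220

print(f"Projected 2024 total: {total_2024_est:.0f} ({yoy:.0%} over {total_2023})")
```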
Siemens Healthineers led in all H1 2024 clearances with 11, bringing its total to 70 (66 for Siemens and four for Varian). GE HealthCare remains the leader with 80 total clearances after adding three in H1 2024 (GE’s total includes companies it has acquired, like Caption Health and MIM Software). There’s a big drop off after GE and Siemens, including Canon Medical (30), Aidoc (24), and Philips (24).
The FDA's list includes both software-only algorithms and hardware devices like scanners with built-in AI capabilities, such as a mobile X-ray unit that can alert users to emergent conditions.
Indeed, many of the authorizations on the FDA’s list are for updated versions of already-cleared products rather than brand-new solutions – a trend that tends to inflate radiology’s share of approvals.
The Takeaway
The new FDA numbers on AI/ML regulatory authorizations are significant not only for revealing the growth in approvals, but also because the agency appears to be releasing the updates more frequently – perhaps a sign it is practicing what it preaches when it comes to AI openness and transparency.
An AI algorithm that examined teleradiology studies for signs of intracranial hemorrhage had mixed performance in a new study in Radiology: Artificial Intelligence. AI helped detect ICH cases that might have been missed, but false positives slowed radiologists down.
AI is being touted as a tool that can detect unseen pathology and speed up the workflow of radiologists facing an environment of limited resources and growing image volume.
This dynamic is particularly evident at teleradiology practices, which frequently see high volumes during off-hour shifts; indeed, a recent study found that telerad cases had higher rates of patient death and more malpractice claims than cases read by traditional radiology practices.
So teleradiologists could use a bit more help. In the new study, researchers from the VA’s National Teleradiology Program assessed Avicenna.ai’s CINA v1.0 algorithm for detecting ICH on STAT non-contrast head CT studies.
AI was used to analyze 58.3k CT exams processed by the teleradiology service from January 2023 to February 2024, with a 2.7% prevalence of ICH.
Results were as follows…
AI flagged 5.7k studies as positive for acute ICH and 52.7k as negative.
Final radiology reports confirmed that 1.2k exams were true positives, for a sensitivity of 76% and a positive predictive value of 21%.
There were 384 false negatives (missed ICH cases); specificity was 92% and negative predictive value 99.3% (the full confusion-matrix arithmetic is sketched after this list).
The algorithm's performance at the VA was a bit lower than in previously published literature.
Cases that the algorithm falsely flagged as positive took over a minute longer to interpret than before AI deployment.
Overall, case interpretation times were slightly lower after AI than before.
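For readers who want to verify the math, here's a minimal sketch reconstructing the study's confusion matrix from the counts above (the false-positive and true-negative counts are derived from the reported figures, not reported directly).

```python
# Reconstruct the confusion matrix from the counts reported above.
# FP and TN are derived from those figures, not reported directly.
total_exams = 58_300
flagged_pos = 5_700     # AI-positive studies
flagged_neg = 52_700    # AI-negative studies
true_pos    = 1_200     # confirmed ICH among AI positives
false_neg   = 384       # missed ICH among AI negatives

false_pos = flagged_pos - true_pos    # 4,500
true_neg  = flagged_neg - false_neg   # 52,316

sensitivity = true_pos / (true_pos + false_neg)    # ~76%
specificity = true_neg / (true_neg + false_pos)    # ~92%
ppv = true_pos / flagged_pos                       # ~21%
npv = true_neg / flagged_neg                       # ~99.3%
prevalence = (true_pos + false_neg) / total_exams  # ~2.7%

print(f"Sens {sensitivity:.0%}, spec {specificity:.0%}, PPV {ppv:.0%}, "
      f"NPV {npv:.1%}, prevalence {prevalence:.1%}")
```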
One issue to note is that the CINA algorithm is not intended for small hemorrhages with volumes < 3 mL; the researchers did not exclude these cases from their analysis, which could have reduced its performance.
Also, the VA teleradiology program's ICH prevalence of 2.7% was lower than the 10% prevalence Avicenna has used to benchmark the algorithm's performance.
The Takeaway
The new findings aren't exactly a slam dunk for AI in the teleradiology setting, but real-world results like these are precisely what's needed to assess the true value of the technology compared to outcomes in more tightly controlled environments.
In one of the most famous quotes about radiology and artificial intelligence, Curtis Langlotz, MD, PhD, once said that AI will not replace radiologists, but radiologists with AI will replace those without it. A new study in AJR illustrates his point, showing that radiologists using a commercially available AI algorithm had higher rates of detecting incidental pulmonary embolism on CT scans.
AI is being applied to many clinical use cases in radiology, but one of the more promising is for detecting and triaging emergent conditions that might have escaped the radiologist’s attention on initial interpretations.
Pulmonary embolism is one such condition. PE can be life-threatening and occurs in 1.3-2.6% of routine contrast-enhanced CT exams, but radiologist miss rates range from 10-75% depending on patient population.
AI can help by automatically analyzing CT scans and alerting radiologists to PEs so they can be treated quickly; the FDA has authorized several algorithms for this clinical use.
In the new paper, researchers conducted a prospective real-world study of Aidoc’s BriefCase for iPE Triage at the University of Alabama at Birmingham.
Researchers tracked rates of PE detection in 4.3k patients before and after AI implementation in 2021, finding …
Radiologists saw their sensitivity for PE detection go up after AI implementation (80% vs. 96%).
Specificity was statistically unchanged (99.1% vs. 99.9%, p=0.58).
The PE incidence rate went up (1.4% vs. 1.6%).
There was no statistically significant difference in report turnaround time before and after AI (65 vs. 78 minutes, p=0.26).
The study echoes findings from 2023, when researchers from UT Southwestern also used the Aidoc algorithm for PE detection, in that case finding that AI cut times for report turnaround and patient waits.
The Takeaway
While studies showing AI’s value to radiologists are commonplace, many of them are performed under controlled conditions that don’t translate to the real world. The current study is significant because it shows that with AI, radiologists can achieve near-perfect detection of a potentially life-threatening condition without a negative impact on workflow.
Echocardiography is a pillar of cardiac imaging, but it is operator-dependent and time-consuming to perform. In this interview, The Imaging Wire spoke with Seth Koeppel, Head of Business Development, and José Rivero, MD, RCS, of echo AI developer Us2.ai about how the company’s new V2 software moves the field toward fully automated echocardiography.
The Imaging Wire: Can you give a little bit of background about Us2.ai and its solutions for automated echocardiography?
Seth Koeppel: Us2.ai is a company that originated in Singapore. The first version of the software (Us2.V1) received its FDA clearance a little over two years ago for an AI algorithm that automates the analysis and reporting of 23 key echocardiogram measurements for the evaluation of diastolic and systolic function.
In April 2024 we received an expanded regulatory clearance covering a total of 45 measurements. Counting measurements derived from those core 45, almost 60 measurements are now fully validated and automated, and with that Us2.V2 is bordering on full automation for echocardiography.
The application is vendor-agnostic – we basically can ingest any DICOM image and in two to three minutes produce a full report and analysis.
The software replicates what the expert human does during the traditional 45-60 minutes of image acquisition and annotation in echocardiography. Typically, echocardiography involves acquiring images and video at 40 to 60 frames per second, resulting in some cases in up to 100 individual images from a two- or three-second loop.
The human expert then scrolls through these images to identify the best end-diastolic and end-systolic frames, manually annotating and measuring them, which is time-consuming and requires hundreds of mouse clicks. This process is very operator-dependent and manual.
And so the advantage the AI has is that it does all of that in a fraction of the time, annotating every frame of every loop, producing more data, and doing it with zero variability.
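To make that workflow concrete, here's a toy sketch of the frame-selection step Koeppel describes. The per-frame left-ventricle areas stand in for the output of a hypothetical segmentation model; Us2.ai's actual pipeline isn't public in this form, so treat this purely as an illustration of the concept.

```python
import numpy as np

# Toy illustration of automated end-diastolic/end-systolic frame selection.
# `lv_areas` stands in for per-frame LV areas from a hypothetical
# segmentation model; this is NOT Us2.ai's actual pipeline.
def select_ed_es_frames(lv_areas: np.ndarray) -> tuple[int, int]:
    smoothed = np.convolve(lv_areas, np.ones(3) / 3, mode="same")
    ed = int(np.argmax(smoothed))  # end-diastole: LV at its largest
    es = int(np.argmin(smoothed))  # end-systole: LV at its smallest
    return ed, es

# Synthetic 2-second loop at 50 fps (one beat per second).
t = np.linspace(0, 2, 100)
lv_areas = 40 + 15 * np.cos(2 * np.pi * t)  # crude LV area curve, cm^2
ed, es = select_ed_es_frames(lv_areas)
print(f"End-diastolic frame: {ed}, end-systolic frame: {es}")
```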
The Imaging Wire: AI is being developed for a lot of different medical imaging applications, but it seems like it’s particularly important for echocardiography. Why would you say that is?
José Rivero: It’s well known that healthcare institutions and providers are dealing with a larger number of patients and more complex cases. Echo is basically a pillar of cardiac imaging and really touches every patient throughout the path of care. We bring efficiency to the workflow and clinical support for diagnosis and treatment and follow-ups, directly contributing to enhanced patient care.
Additionally, the variability is a huge challenge in echo, as it is operator-dependent. Much of what we see in echo is subjective, certain patient populations require follow-up imaging, and for such longitudinal follow-up exams you want to remove the inter-operator variability as much as possible.
Seth Koeppel: Echo is ripe for disruption. We are faced with a huge shortage of cardiac sonographers. If you simply go on Indeed.com and you type in “cardiac sonographer,” there’s over 4,000 positions open today in the US. Most of those have somewhere between a $10,000, $15,000, up to $20,000 signing bonus. It is an acute problem.
We’re very quickly approaching a situation where we’re running huge backlogs – months in some situations – to get just a baseline echo. The gold standard for diagnosis is an echocardiogram. And if you can’t perform them, you have patients who are going by the wayside.
In our current system today, the average tech will do about eight echoes a day. An echo takes 45 to 60 minutes, because it's so manual and relies on expert humans. For the past 35 years echo has looked the same: there has been no real innovation other than better image quality, while more parameters were added, resulting in more things to analyze in that same 45 or 60 minutes.
This is the first time that we can think about doing echo in less than 45 to 60 minutes, which is a huge enhancement in throughput because it addresses both that shortage of cardiac sonographers and the increasing demand for echo exams.
It also represents a huge benefit to sonographers, who often suffer repetitive stress injuries due to the poor ergonomics of echo – one hand holding the probe tightly pressed against the patient's chest, the other on the cart scrolling, clicking, and measuring – resulting in a high incidence of injuries to the neck, shoulders, and wrists.
Studies have shown that 20-30% of techs leave the field due to work-related injury. If the AI can take on the role of making the majority of the measurements, in essence turning the sonographer into more of an “editor” than a “doer,” it has the potential to significantly reduce injury.
Interestingly, we saw many facilities move to “off-cart” measurements during COVID to reduce the time the tech was exposed to the patient, and many realized the benefits and maintained this workflow, which we also see in pediatrics, as kids have a hard time lying on the table for 45 minutes.
So with the introduction of AI in the echo workflow, the technicians acquire the images in 15-20 minutes and, in real time, the images processed by the AI software are all automatically labeled, annotated, and measured. Within 2-3 minutes, a full report is available for the tech to review, adjust (our measurements are fully editable), confirm, and sign off on.
You can immediately see the benefits of reducing the time the tech has the probe in their hand and the patient spends on the table, and the tech then gets to sit at an ergonomically correct workstation (proper keyboard, mouse, large monitors, chair, etc.) and do their reporting versus on-cart, which is where the injuries occur.
It's a worldwide shortage, not just here in the US. In other parts of the world, waitlist times for an echo can be eight, 10, 12, or more months, which is just not acceptable.
The OPERA study in the UK demonstrated that the introduction of AI echo can tackle this issue. In Glasgow, the wait time for an echo was reduced from 12 months to under six weeks.
The Imaging Wire: You just received clearance for V2, but V1 has been in the clinical field for some time already. Can you tell us more about the feedback from your customers on their use of V1?
José Rivero: Clinically, the focus of V1 was heart failure and pulmonary hypertension. This is a critical step, because with AI, we could rapidly identify patients with heart failure or pulmonary hypertension.
One big step has been pairing the AI with mobile devices, which takes echocardiography out of the hospital. You can just go everywhere with this technology.
We demonstrated the feasibility of new clinical pathways using AI echo out of the hospital, in clinics or primary care settings, including novice screening [1,2] (no previous experience in echocardiography but supported by point-of-care ultrasound including AI guidance and Us2.ai analysis and reporting).
Seth Koeppel: We're addressing the efficiency problem. Most people peg the time savings for the tech on the overall echo at somewhere around 15 to 20 minutes, which is significant. A recent cardiologist-led study from Japan using the Us2.ai software, published in the Journal of Echocardiography, found a 70% reduction in overall time for analysis and reporting [3].
The Imaging Wire: Let’s talk about version 2 of the software. When you started working on V2, what were some of the issues that you wanted to address with that?
Seth Koeppel: Version 1, version 2, it’s never changed for us, it’s about full automation of all echo. We aim to automate all the time-consuming and repetitive tasks the human has to do – image labeling and annotation, the clicks, measurements, and the analysis required.
Our medical affairs team works closely with the AI team, drawing on feedback from our users, to set the roadmap for the development of our software, prioritizing developments that meet clinical needs and expectations. In V2, we are now covering valve measurements and further enhancing our performance on HFpEF, as demonstrated in comparison to the gold standard, pulmonary capillary wedge pressure (PCWP) [4].
A new version is really about collaborating with leading institutions and researchers, acquiring excellent datasets for training the models until they reach a level of performance producing robust results we can all be confident in. Beyond the software development and training, we also engage in validation studies to further confirm the scientific efficiency of these models.
With V2 we’re also moving now into introducing different protocols, for example, contrast-enhanced imaging, which in the US is significant. We see in some clinics upwards of 50% to 60% use of contrast-enhanced imaging, where we don’t see that in other parts of the world. Our software is now validated for use with ultrasound-enhancing agents, and the measures correlate well.
Stress echo is another big application in echocardiography. So we’ve added that into the package now, and we’re starting to get into disease detection or disease prediction.
V2 also addresses cardiac amyloidosis (CA): it aligns with guideline-based measurements for identifying CA, and when such measurements are found it reports them along with the relevant guideline recommendations, supporting identification of a condition that could otherwise be missed.
José Rivero: We are at a point where we are now able to really go into more depth into the clinical environment, going into the echo lab itself, to where everything is done and where the higher volumes are. Before we had 23 measurements, now we are up to 45.
And again, it can even be a screening tool. If we start thinking about subdividing the things we do in echocardiography with AI, this again expands to the mobile environment. There are a lot of different disease-based assessments that we do, and we are now a more complete AI echocardiography assessment tool.
The Imaging Wire: Clinical guidelines are so important in cardiac imaging and in echocardiography. Us2.ai integrates and refers to guideline recommendations in its reporting. Can you talk about the importance of that, and how you incorporate this in the software?
José Rivero: Clinical guidelines play a crucial role in imaging for supporting standardized, evidence-based practice, as well as minimizing risks and improving quality for the diagnosis and treatment of patients. These are issued by experts, and adherence to guidelines is an important topic for quality of care and GDMT (guideline-directed medical therapies).
We are a scientifically driven company, so we recognize that international guidelines and recommendations are of the utmost importance; hence, guideline indications are systematically visible and discrepant measurement values are clearly highlighted.
Seth Koeppel: The beautiful thing about AI in echo is that echo is so structured that it just lends itself so perfectly to AI. If we can automate the measurements, and then we can run them through all the complicated matrices of guidelines, it’s just full automation, right? It’s the ability to produce a full echo report without any human intervention required, and to do it in a fraction of the time with zero variability and in full consideration for international recommendations.
José Rivero: This is another level of support we provide: the sonographer only has to focus on the image acquisition, and the cardiologist doing the overreading and checking the data will have these references brought to his or her attention.
With echo you need to support every point in the workflow – the sonographer focusing on image acquisition, the cardiologist on overreading and checking the data. In the end, those two come together when the cardiologist and the sonographers realize that there's efficiency on both ends.
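As a rough illustration of that guideline "matrix" idea, a reporting layer can compare each automated measurement against guideline reference intervals and flag discrepant values. The sketch below is hypothetical; the ranges shown are illustrative placeholders, not Us2.ai's actual reference values.

```python
# Illustrative guideline check: flag measurements outside reference ranges.
# Ranges are placeholders for illustration, NOT Us2.ai's actual values.
REFERENCE_RANGES = {
    "lvef_pct":           (52.0, 72.0),  # LV ejection fraction
    "la_vol_index_ml_m2": (16.0, 34.0),  # indexed left-atrial volume
    "e_over_e_prime":     (0.0, 14.0),   # E/e' ratio
}

def flag_discrepant(measurements: dict) -> dict:
    """Return a message for each measurement outside its reference range."""
    flags = {}
    for name, value in measurements.items():
        low, high = REFERENCE_RANGES[name]
        if not low <= value <= high:
            flags[name] = f"{value} outside reference range {low}-{high}"
    return flags

print(flag_discrepant(
    {"lvef_pct": 45.0, "la_vol_index_ml_m2": 30.0, "e_over_e_prime": 16.2}
))
```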
The Imaging Wire: V2 has only been out for a short time now but has there been research published on use of V2 in the field and what are clinicians finding?
Seth Koeppel: In V1, our software included a section labeled “investigational,” and some AI measurements were accessible for research purposes only as they had not yet received FDA clearance.
Opening access to these as investigational-research-only has enabled users to test them out and confirm the performance of the AI measurements in independently led publications and abstracts. This is why you are already seeing these studies out … and it is wonderful to see users' interest in publishing on AI echo – a "trust and verify" approach.
With V2 and the FDA clearance, these measurements, our new features and functionalities, are available for clinical use.
The Imaging Wire: What about the economics of echo AI?
Seth Koeppel: Reimbursement is still front and center in echo, and people don't realize how robust it is, partially because echo is so manual and time-consuming. Hospital echo still reimburses nearly $500 under HOPPS (Hospital Outpatient Prospective Payment System). Compare that to CT, where today you might get $140 global, or MRI at $300-$350 – an echo still pays $500.
When you think about the dynamic, it still relies on an expert human that makes typically $100,000 plus a year with benefits or more. And it takes 45 to 60 minutes. So the economics are such that the reimbursement is held very high.
But imagine being able to do two or three more echoes per day with the assistance of AI – you can immediately see the ROI. If you can do just two incremental echoes a day, and there are 254 working days in a year, that's roughly 500 incremental echoes.
If there are 2,080 hours in a year and we average about an echo every hour, most places are producing about 2,000 echoes; now you're taking them to 2,500 or more. At $500 apiece, that's an additional $250K per tech. Many hospitals have 8-10 techs scanning on any given day, so it's a really compelling ROI.
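Koeppel's arithmetic is easy to check; here's a back-of-envelope sketch using only the figures he cites.

```python
# Back-of-envelope ROI check using the figures cited above.
extra_echoes_per_day = 2
working_days = 254
hopps_rate_usd = 500   # approximate hospital echo reimbursement

extra_echoes = extra_echoes_per_day * working_days  # 508, "roughly 500"
extra_revenue = extra_echoes * hopps_rate_usd       # $254,000 per tech/year

print(f"{extra_echoes} extra echoes -> ${extra_revenue:,} per tech per year")
```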
This is an AI that has both a clinical benefit and a huge ROI. There's this whole debate out there about who pays for AI and how it gets paid for – this one's a no-brainer.
The Imaging Wire: If you could step back and take a holistic view of V2, what benefits do you think that your software has for patients as well as hospitals and healthcare systems?
Seth Koeppel: It goes back to just the inefficiencies of echo – you’re taking something that is highly manual, relies on expert humans that are in short supply. It’s as if you’re an expert craftsman, and you’ve been cutting by hand with a hand tool, and then somebody walks in and hands you a power tool. We still need the expert human, who knows where to cut, what to cut, how to cut. But now somebody has given him a tool that allows him to just do this job so much more efficiently, with a higher degree of accuracy.
Let's take another example. Strain is something that has been particularly difficult for operators, because every vendor, every cart manufacturer has its own proprietary strain measurement. You can't compare strain results done on a GE cart to a Philips cart to a Siemens cart. It takes time, you have to train the operators, and you have human variability in there.
In V2, strain is now included, it’s fully automated, and it’s vendor-neutral. You don’t have to buy expensive upgrades to carts to get access to it. So many, many problems are solved just in that one simple set of parameters.
If we put it all together and look at the potential of AI echo, we can address the backlog, allow for more echo to be done in the echo lab but also in primary care settings and clinics where AI echo opens new pathways for screening and detection of heart failure and heart disease at an early stage, early detection for more efficient treatment.
This helps facilities facing the increasing demand for echo support and creates efficient longitudinal follow-up for oncology patients or populations at risk.
In addition, we can open access to echo exams in parts of the world which do not have the expensive carts nor the expert workforce available and deliver on our mission to democratize echocardiography.
José Rivero: I would say that V2 is a very strong release, which includes contrast, stress echo, and strain. I would love to see all three, along with everything we had in V1, become mainstream, and to see customer satisfaction with this, because I think it brings a big solution to the echo world.
The Imaging Wire: As the year progresses, what else can we look forward to seeing from Us2.ai?
José Rivero: In the clinical area, we will continue our work to expand the range of measurements and validate our detection models, but we are also very keen to start looking into pediatric echo.
Seth Koeppel: Our user interface has been greatly improved in V2, and this is something we really want to keep focusing on. We are also working on refining our automated reporting to include customization features, perfecting the report output to further support the clinicians reviewing it, and integrating LLMs to make reporting accessible for non-expert HCPs and the patients themselves.
REFERENCES
1. Tromp, J., Sarra, C., Bouchahda, N., Ben Messaoud, M., Zouari, F., Hummel, Y., Mzoughi, K., Kraiem, S., Fehri, W., Gamra, H., Lam, C. S. P., Mebazaa, A., & Addad, F. (2023). Nurse-led home-based detection of cardiac dysfunction by ultrasound: Results of the CUMIN pilot study. European Heart Journal – Digital Health.
2. Huang, W., Lee, A., Tromp, J., Teo, L. Y., Chandramouli, C., Ng, C. T., Huang, F., Lam, C. S. P., & Ewe, S. H. (2023). Point-of-care AI-assisted echocardiography for screening of heart failure (HANES-HF). Journal of the American College of Cardiology, 81(8), 2145.
3. Hirata, Y., Nomura, Y., Saijo, Y., Sata, M., & Kusunose, K. (2024). Reducing echocardiographic examination time through routine use of fully automated software: A comparative study of measurement and report creation time. Journal of Echocardiography.
4. Yaku, H., Komtebedde, J., Silvestry, F. E., & Shah, S. J. (2024). Deep learning-based automated measurements of echocardiographic estimators of invasive pulmonary capillary wedge pressure perform equally to core lab measurements: Results from REDUCE LAP-HF II. Journal of the American College of Cardiology, 83(13), 316.
Is radiology’s AI edge fading, at least when it comes to its share of AI-enabled medical devices being granted regulatory authorization by the FDA? The latest year-to-date figures from the agency suggest that radiology’s AI dominance could be declining.
Radiology was one of the first medical specialties to go digital, and software developers have targeted the field for AI applications like image analysis and data reconstruction.
Indeed, FDA data from recent years shows that radiology makes up the vast majority of agency authorizations for AI- and machine learning-enabled medical devices, ranging from 86% in 2020 and 2022 to 79% in 2023.
But in the new data, radiology devices made up only 73% of authorizations from January-March 2024. Other data points indicate that the FDA …
Authorized 151 new devices since August 2023
Reclassified as AI/ML-enabled 40 devices that were previously authorized
Authorized a total of 882 devices since it began tracking the field
In an interesting wrinkle, many of the devices on the updated list are big-iron scanners that the FDA has decided to classify as AI/ML-enabled devices.
These include CT and MRI scanners from Siemens Healthineers, ultrasound scanners from Philips and Canon Medical Systems, an MRI scanner from United Imaging, and the recently launched Butterfly iQ3 POCUS scanner.
The additions could be a sign that imaging OEMs increasingly are baking AI functionality into their products at a basic level, blurring the line between hardware and software.
The Takeaway
It should be no cause for panic that radiology’s share of AI/ML authorizations is declining as other medical specialties catch up to the discipline’s head start. The good news is that the FDA’s latest figures show how AI is becoming an integral part of medicine, in ways that clinicians may not even notice.
AI has shown in research studies it can help radiologists interpret breast screening exams, but for routine clinical use many questions remain about the optimal AI parameters to catch the most cancers while generating the fewest callbacks. Fortunately, a massive new study out of Norway in Radiology: Artificial Intelligence provides some guidance.
Recent research such as the MASAI trial has already demonstrated that AI can help reduce the number of screening mammograms radiologists have to review, and for many low-risk cases eliminate the need for the double-reading that is commonplace in Europe.
But growing interest in breast screening AI is tempered by the field’s experience with computer-aided detection, which was introduced over 20 years ago but generated many false alarms that slowed radiologists down.
Fast forward to 2024. The new generation of breast AI algorithms seems to have addressed CAD’s shortcomings, but it’s still not clear exactly how they can best be used.
Researchers from Norway’s national breast screening program tested one mammography AI tool – Lunit’s Insight MMG – in a study with data obtained from 662k women screened with 2D mammography from 2004 to 2018.
Researchers tested AI with a variety of specificity and sensitivity settings based on AI risk scores; in one scenario, exams with the top 50% of risk scores were classified as positive for cancer, while in another that threshold was set to the top 10% (a minimal sketch of this triage rule follows the list below). The group found …
At the 50% cutoff, AI would correctly identify 99% of screen-detected cancers and 85% of interval cancers.
At the 10% cutoff, AI would detect 92% of screen-detected cancers and 45% of interval cancers.
AI was understandably better at correctly classifying false-positive cases as negative at the 10% threshold than at the 50% threshold (69% vs. 17%).
AI had a higher AUC than double-reading for screen-detected cancers (0.97 vs. 0.88).
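To make the cutoff scenarios concrete, here's a minimal sketch of percentile-based triage on AI risk scores. The scores are synthetic; Lunit's actual score scale and operating points aren't reproduced here.

```python
import numpy as np

# Minimal sketch of percentile-based triage on AI risk scores.
# Scores are synthetic; Lunit's actual scale isn't reproduced here.
rng = np.random.default_rng(0)
risk_scores = rng.random(10_000)  # one score per screening exam

def triage(scores: np.ndarray, positive_fraction: float) -> np.ndarray:
    """Flag the top `positive_fraction` of scores as AI-positive."""
    cutoff = np.quantile(scores, 1.0 - positive_fraction)
    return scores >= cutoff

for frac in (0.50, 0.10):  # the study's two scenarios
    flagged = triage(risk_scores, frac)
    print(f"Top-{frac:.0%} rule flags {flagged.mean():.1%} of exams")
```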
How generalizable is the study? It's worth noting that the research relied on AI analysis of 2D mammography, which is prevalent in Europe (most mammography in the US employs DBT). In fact, Lunit is targeting the US market with its recently cleared Insight DBT algorithm rather than Insight MMG.
The Takeaway
As with MASAI, the new study offers an exciting look at AI's potential for breast screening. Ultimately, it may turn out that there's no single sensitivity and specificity threshold at which mammography AI should be set; instead, each breast imaging facility might choose the parameters that best suit the characteristics of its radiologists and patient population.