Unlocking Body Composition Insights with Voronoi Health Analytics

Body composition plays a pivotal role in monitoring organ and tissue health and predicting treatment outcomes. Changes in body composition metrics can indicate reduced muscle quantity and quality – a sign of sarcopenia – as well as altered fat distribution, such as liver fat in metabolic diseases and epicardial and paracardial fat in cardiovascular disease.

However, manual segmentation is time-consuming and labor-intensive. 

  • Voronoi Health Analytics eliminates this bottleneck by combining cutting-edge AI with efficient visualization tools, automating the extraction of body composition metrics from CT and MRI scans. The company’s solutions transform imaging data into actionable insights, improving patient outcomes.

Voronoi Health Analytics provides innovative, intuitive AI tools that enable clinicians and researchers to extract quantitative body composition measurements rapidly and with high accuracy – no programming required. 

  • The company’s platforms are trusted by over 175 research labs across 25 countries, with numerous publications validating their accuracy and impact on clinical care and medical research.

Voronoi has two flagship solutions …

  • DAFS: A comprehensive 3D segmentation platform for analyzing multiple tissues, organs, lesions, and vasculature across CT and PET/CT imaging. DAFS also overlays CT segmentations onto PET scans, enabling rapid, high-accuracy assessments of PET tracer uptake in organs, tissues, and lesions.
  • DAFS Express: Optimized for single-slice body composition analysis from CT and MRI scans, this tool delivers precise measurements of skeletal muscle, visceral fat, intermuscular fat, and subcutaneous fat in seconds, making it ideal for high-throughput clinical settings.

Accurate body composition analysis is critical for staging body habitus, detecting early signatures of adverse health conditions such as metabolic or cardiovascular disorders, evaluating disease progression, and monitoring organ and tissue health in response to disease and intervention. Voronoi’s platforms address key challenges such as …

  • Reducing Workloads: Automate routine segmentation tasks and allow clinicians to focus on complex cases.
  • Improving Precision: Deliver consistent, reproducible results across patients and studies.
  • Advancing Care: Provide predictive insights that help optimize treatment strategies.

DAFS and DAFS Express seamlessly integrate into existing imaging workflows, enhancing efficiency without disrupting operations.

Body composition analysis goes beyond measuring muscle and fat. It quantifies all organs and tissues, creating data that drives predictive models. 

  • Voronoi’s vision is to empower healthcare professionals with tools that simplify complexity, support proactive care, and enhance patient outcomes.

Discover how Voronoi Health Analytics is revolutionizing body composition analysis. Visit the company’s website to request a demo and elevate your workflow today.

Time to Embrace X-Ray AI for Early Lung Cancer Detection

Each year approximately 2 billion chest X-rays are performed globally. They are fast, noninvasive, and relatively inexpensive, making them a front-line diagnostic in outpatient, emergency, and community settings.

  • But beyond the simplicity of CXR lies a secret weapon in the fight against lung cancer: artificial intelligence. 

Be it serendipitous screening, opportunistic detection, or incidental identification, AI incorporated into CXR has the potential to screen patients for disease while they undergo an unrelated medical examination.

  • This could include the patient in the ER undergoing a CXR for suspected broken ribs after a fall, or an individual referred by their doctor for a CXR with suspected pneumonia. Despite having no symptoms of lung cancer, these patients may unknowingly have small yet growing pulmonary nodules. 

AI can find these abnormalities and flag them to clinicians as suspicious findings for further investigation. 

  • This has the potential to find nodules earlier, in the very early stages of lung cancer when it is easier to biopsy or treat. 

Meanwhile, only 5.8% of eligible ex-smoking Americans undergo CT-based lung cancer screening. 

  • So the ability to cast the detection net wider through incidental pulmonary nodule detection has significant merits. 

Early global studies into the power of AI for incidental pulmonary nodules (IPNs) show exciting promise.

  • The latest evidence – one lung cancer detected for every 1,120 CXRs – has major implications for diagnosing and treating people earlier, and potentially saving lives. 

The qXR-LN chest X-ray AI algorithm from Qure.ai is raising the bar for incidental pulmonary nodule detection. In a retrospective study performed on missed or mislabelled US CXR data, qXR-LN achieved an impressive negative predictive value of 96% and an AUC score of 0.99 for detection of pulmonary nodules. 

  • By acting as a second pair of eyes for radiologists, qXR-LN can help detect subtle anatomical anomalies that may otherwise go unnoticed, particularly in asymptomatic patients.

The FDA-cleared solution serves as a crucial second reader, assisting in the review of chest radiographs on the frontal projection. 

  • In another multicenter study involving 40 sites across the U.S., the qXR-LN algorithm demonstrated an impressive AUC of 0.94 for scan-level nodule detection, highlighting its potential to significantly impact patient outcomes by identifying early signs of lung cancer that can easily be missed. 

The Takeaway 

By harnessing the power of AI for opportunistic lung cancer surveillance, healthcare providers can adopt a proactive approach to early detection without significant new investment – and ultimately improve patient survival rates.

Qure.ai will be exhibiting at RSNA 2024, December 1-4. Visit booth #4941 for discussion, debate, and demonstrations.

Sources

AI-based radiodiagnosis using Chest X-rays: A review. Big Data Analytics for Social Impact, Volume 6 – 2023

Results from a feasibility study for integrated TB & lung cancer screening in Vietnam, Abstract presentation UNION CONF 2024: 2560   

Performance of a Chest Radiography AI Algorithm for Detection of Missed or Mislabelled Findings: A Multicenter Study. Diagnostics 12, no. 9 (2022): 2086

Qure.ai. Qure.ai’s AI-Driven Chest X-ray Solution Receives FDA Clearance for Enhanced Lung Nodule Detection. Qure.ai, January 7, 2024

Using AI-Powered Automation to Help Solve Today’s Radiology Crisis

Reimbursement cuts. Radiologist and staff shortages. Rising costs. Surging imaging volumes. Overwhelming staff workloads. Shrinking margins. 

Sound familiar?

Radiology departments, imaging centers, and radiology practices are facing a perfect storm of challenges to deliver high-quality patient care while remaining profitable and competitive. 

  • This familiar narrative underscores the need for change – to embrace automation, AI, and technology solutions that take over routine tasks. 

RADIN Health has developed an innovative, cloud-based (SaaS), all-in-one technology stack based on the firsthand experience of radiologist Alejandro Bugnone, MD, CEO and medical director of Total Medical Imaging (TMI), a teleradiology group that reads for imaging centers and hospital systems nationally.  

  • Dr. Bugnone and his team of radiologists were similarly suffering from supply and demand imbalance, reimbursement cuts, increasing study volumes, and customer pressures to maintain their margin. 

As a software developer and seasoned radiologist, Dr. Bugnone was equally frustrated by the lack of a comprehensive, end-to-end technology solution in the market to address these same issues for his teleradiology practice.  

  • In evaluating numerous RIS, PACS, AI voice recognition, and workflow management solutions, his team found that each required expensive interfaces, separate company fees, and ongoing support, yet as an ecosystem still did not deliver a seamless experience that would provide a return on investment. 

An alternative is a system based on straight-through processing, a concept pioneered in the financial services industry in which transactions are processed electronically without manual intervention. 

“I knew there had to be a better way forward. I founded RADIN Health for healthcare and teleradiology practices [like TMI], imaging centers, and radiology departments based on straight-through processing, similar to how Wall Street sped up financial transactions without any human intervention,” Dr. Bugnone said. 

RADIN Health is a cloud-based platform that combines RIS, PACS, dictation AI, and workflow management into an all-in-one software solution. 

  • It leverages artificial intelligence, machine learning, OCR/AI, natural language processing (NLP), and other intellectual property.

Dr. Bugnone said TMI has achieved remarkable efficiencies with RADIN. 

“Our results at TMI have been staggering since implementing RADIN over the past 18 months for our complex teleradiology practice,” Dr. Bugnone noted. “With RADIN DICTATION AI, our radiologists have increased their productivity and efficiency, reducing dictation times 30% to 50%.” 

By adding RADIN SELECT, TMI reduced its SLA turnaround times by more than 50% and its FTEs for managing operational workflow tasks by 70%, all while adding 35% in study volume.  

  • RADIN’s all-in-one technology solution has enabled Total Medical Imaging to meet the challenges of the radiology crisis without hiring new personnel – simply by unlocking the efficiency of their existing staff. 

“We have enjoyed significant growth in 2024 without the need to hire additional staff,” Dr. Bugnone concluded.

Watch the video below to see how RADIN’s all-in-one solution can help your practice.

Reduce the Mess, Reduce the Stress: Automating and Accelerating Efficiency in Complex Medical Imaging Environments

Repetitive, arduous tasks are a major contributor to burnout – an increasingly prevalent issue in healthcare. While digital innovation is transformative, introducing more technology to workflows often creates additional layers of complexity, hindering efficiency, performance monitoring, and ultimately the quality of care.

As a result, once-simple traditional workflows have grown cumbersome over time, filled with many interconnected tasks that are difficult to manage. 

  • As these processes become more complex, it’s clear that healthcare needs to reduce, subtract, and simplify to maintain high standards of care.

Every traditional (or macro) workflow consists of multiple smaller tasks or steps (micro-workflows), many of which are still performed manually. 

  • Consider a wound care scenario where a practitioner takes images, searches for the patient’s record in the EHR, uploads the images, and manually enters encounter details. 

While each individual task may seem small, when multiplied by dozens of similar interactions each day, these repetitive steps …

  • Decrease the time providers have for meaningful patient interactions.
  • Lower overall productivity.
  • Increase the potential for human error.
  • Contribute to burnout and fatigue.

Micro-workflows address this by breaking down processes into discrete, manageable steps. For example, by …

  • Identifying the patient within the EHR.
  • Capturing the image.
  • Automatically inputting relevant metadata.
  • Seamlessly sharing the image with the care team.

This granular approach enables automation, allowing individual components to be optimized or modified without disrupting the entire process. 

  • Micro-workflows offer adaptability, efficiency, and responsiveness, meeting evolving clinical requirements while reducing complexity.

Moreover, micro-workflows make it possible to monitor individual tasks with precision. 

  • This approach allows healthcare organizations to pinpoint workflow gaps, troubleshoot issues, and resolve performance bottlenecks. 
  • In multi-vendor environments, where integrating various systems and applications can be a challenge, the ability to streamline processes and automate tasks becomes especially valuable.
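The decomposition and per-step monitoring described above can be sketched in code. The following is a minimal illustration of the micro-workflow concept (not the Strings platform itself; every name here is hypothetical): a macro-workflow is registered as discrete steps, each timed individually so a bottleneck can be pinpointed and optimized without disturbing the rest of the process.

```python
import time
from dataclasses import dataclass, field
from typing import Callable

# Illustrative sketch only: a macro-workflow decomposed into discrete,
# individually timed micro-workflow steps. All names are hypothetical.

@dataclass
class MicroWorkflow:
    name: str
    steps: list = field(default_factory=list)    # (step_name, callable) pairs
    timings: dict = field(default_factory=dict)  # step_name -> seconds

    def step(self, step_name: str):
        """Decorator that registers a function as one micro-workflow step."""
        def register(fn: Callable[[dict], dict]):
            self.steps.append((step_name, fn))
            return fn
        return register

    def run(self, context: dict) -> dict:
        """Execute each step in order, timing it for bottleneck analysis."""
        for step_name, fn in self.steps:
            start = time.perf_counter()
            context = fn(context)
            self.timings[step_name] = time.perf_counter() - start
        return context

wound_care = MicroWorkflow("wound-care imaging")

@wound_care.step("identify_patient")
def identify_patient(ctx: dict) -> dict:
    ctx["patient_id"] = "demo-123"   # stand-in for an EHR lookup
    return ctx

@wound_care.step("capture_image")
def capture_image(ctx: dict) -> dict:
    ctx["image"] = b"\x00\x01"       # stand-in for camera capture
    return ctx

@wound_care.step("attach_metadata")
def attach_metadata(ctx: dict) -> dict:
    ctx["metadata"] = {"patient": ctx["patient_id"], "encounter": "wound"}
    return ctx

@wound_care.step("share_with_care_team")
def share_with_care_team(ctx: dict) -> dict:
    ctx["shared"] = True             # stand-in for a worklist push
    return ctx

result = wound_care.run({})
slowest = max(wound_care.timings, key=wound_care.timings.get)
```

Because each step is registered and timed independently, the slowest step can be identified and automated or replaced on its own – the granularity the passage describes.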

Strings by Paragon is a platform specifically designed to help healthcare organizations harness the power of micro-workflows. 

  • By breaking traditional workflows into smaller, more manageable steps, Strings enables automation, real-time performance tracking, and monitoring across a wide range of applications and infrastructure. 

The platform’s single-pane-of-glass interface provides visibility into complex, multi-vendor environments.

  • Strings offers actionable insights and automated optimizations tailored to specific clinical workflows.

With Strings, organizations can proactively identify workflow bottlenecks, implement targeted optimizations, and measure performance and ROI with precision – leading to improved efficiency, enhanced imaging quality, better patient outcomes, and a value-driven approach to care.

Learn more about Strings by visiting Paragon Health IT’s website, or visit them at RSNA 2024 at booth #1849.

Optimizing Front Office Operations through Integrated Apps and Cloud-Based RIS/PACS

Paradox of High Patient Volumes

At first glance, it may appear that having more patients should naturally lead to higher revenue. But when you consider extra labor costs and decreasing reimbursements, increased volume can turn into diminishing returns.

  • Basically, the cost of adding more staff can end up being higher than the value of additional patient volumes.

Optimal management of growing patient volumes requires a new way of working with automation and cloud-based apps that replace the heavy burden of manual processes.

  • By using technology to eliminate manual processes, medical facilities can manage patient loads better without adding labor costs. 

This proactive approach not only improves efficiencies but also lets front office staff focus on patient needs instead of getting bogged down with administrative tasks. 

  • Ultimately, shifting towards automation and consolidation of tasks is key to maintaining clinic profitability and keeping high standards of care, especially with increasing medical demands.

How RamSoft Can Help Simplify Front Office Operations 

Achieving workflow excellence starts with a single sign-on into a unified RIS/PACS and access to complementary medical imaging apps via a single worklist in the cloud. 

  • By leveraging cloud applications with scalability across facilities, organizations can “build as they grow,” while maintaining control and flexibility.

RamSoft PowerServer and OmegaAI RIS/PACS platforms reduce administrative burdens and costs associated with manual processes. Here’s how…

  • Blume Patient Portal: Patient access to diagnostic images and reports, image sharing with referring clinicians and family, self-scheduling, intake forms, and appointment notifications. These self-service features decrease the number of phone calls, the time needed for patient registration, and the manual process of intake form completion and filing. 
  • pVerify: Batch verification and real-time eligibility checks (authorization available soon) eliminate the need to call multiple insurance providers, freeing up staff time while reducing denials. 
  • PracticeSuite: An embedded billing solution with workflow options to accommodate entries from the RIS/PACS worklist or within the billing module. Staff can quickly access top billing functions, the Payment Ledger for balances and eligibility, and Payment Entry to add payments and print receipts. 
  • openDoctor: Automated appointment notifications through SMS and email that replace lists of confirmation calls and reduce missed appointments. 
  • InterFAX by Upland: Integrated digital workflow for inbound (available soon) and outbound faxes, reducing the need for manual acceptance and processing of referral or report faxes. 

Mobile Applications Are Building a Patient-Centric Experience

Protecting patient data is business-critical for all medical practices, as it is for RamSoft. The company uses Microsoft Azure Cloud to ensure all data and applications are secure.

  • Workflow optimization in medical imaging can significantly impact the patient experience, leading to increased loyalty and satisfaction. 

Is Your Practice Operating Optimally?

Explore how RamSoft’s new automation applications, including patient engagement tools, integrated with cloud-based RIS/PACS can improve operations and profitability of your practice. 

Learn more on the company’s website, or book a demo at RSNA 2024 at booth #6513 in the North Hall.  

Advances in AI-Automated Echocardiography with Us2.ai

Echocardiography is a pillar of cardiac imaging, but it is operator-dependent and time-consuming to perform. In this interview, The Imaging Wire spoke with Seth Koeppel, Head of Business Development, and José Rivero, MD, RCS, of echo AI developer Us2.ai about how the company’s new V2 software moves the field toward fully automated echocardiography. 

The Imaging Wire: Can you give a little bit of background about Us2.ai and its solutions for automated echocardiography? 

Seth Koeppel: Us2.ai is a company that originated in Singapore. The first version of the software (Us2.V1) received FDA clearance a little over two years ago for an AI algorithm that automates the analysis and reporting of 23 key echocardiogram measurements for the evaluation of diastolic and systolic function. 

In April 2024 we received an expanded regulatory clearance for more measurements – a total of 45 measurements are now cleared. Including derived measurements based on those core 45, almost 60 measurements are now fully validated and automated, and with that Us2.V2 is bordering on full automation for echocardiography.

The application is vendor-agnostic – we basically can ingest any DICOM image and in two to three minutes produce a full report and analysis. 

The software replicates what the expert human does during the traditional 45-60 minutes of image acquisition and annotation in echocardiography. Typically, echocardiography involves acquiring images and video at 40 to 60 frames per second, resulting in some cases in up to 100 individual images from a two- or three-second loop. 

The human expert then scrolls through these images to identify the best end-diastolic and end-systolic frames, manually annotating and measuring them, which is time-consuming and requires hundreds of mouse clicks. This process is very operator-dependent and manual.

And so the advantage the AI has is that it will do all of that in a fraction of the time, it will annotate every image of every frame, producing more data, and it does it with zero variability. 

The Imaging Wire: AI is being developed for a lot of different medical imaging applications, but it seems like it’s particularly important for echocardiography. Why would you say that is? 

José Rivero: It’s well known that healthcare institutions and providers are dealing with a larger number of patients and more complex cases. Echo is basically a pillar of cardiac imaging and really touches every patient throughout the path of care. We bring efficiency to the workflow and clinical support for diagnosis and treatment and follow-ups, directly contributing to enhanced patient care.

Additionally, the variability is a huge challenge in echo, as it is operator-dependent. Much of what we see in echo is subjective, certain patient populations require follow-up imaging, and for such longitudinal follow-up exams you want to remove the inter-operator variability as much as possible.

Seth Koeppel: Echo is ripe for disruption. We are faced with a huge shortage of cardiac sonographers. If you simply go on Indeed.com and you type in “cardiac sonographer,” there’s over 4,000 positions open today in the US. Most of those have somewhere between a $10,000, $15,000, up to $20,000 signing bonus. It is an acute problem.

We’re very quickly approaching a situation where we’re running huge backlogs – months in some situations – to get just a baseline echo. The gold standard for diagnosis is an echocardiogram. And if you can’t perform them, you have patients who are going by the wayside. 

In our current system today, the average tech will do about eight echoes a day. An echo takes 45 to 60 minutes because it’s so manual and relies on expert humans. For the past 35 years echo has looked the same. There has been little innovation other than improved image quality, while at the same time more parameters were added, resulting in more things to analyze in that same 45 or 60 minutes. 

This is the first time that we can think about doing echo in less than 45 to 60 minutes, which is a huge enhancement in throughput because it addresses both that shortage of cardiac sonographers and the increasing demand for echo exams. 

It also represents a huge benefit to sonographers, who often suffer repetitive stress injuries due to the poor ergonomics of echo, holding the probe tightly pressed against the patient’s chest in one hand, and the other hand on the cart scrolling/clicking/measuring, etc., which results in a high incidence of repetitive stress injuries to neck, shoulder, wrists, etc. 

Studies have shown that 20-30% of techs leave the field due to work-related injury. If the AI can take on the role of making the majority of the measurements, in essence turning the sonographer into more of an “editor” than a “doer,” it has the potential to significantly reduce injury. 

Interestingly, we saw many facilities move to “off-cart” measurements during COVID to reduce the time the tech was exposed to the patient, and many realized the benefits and maintained this workflow, which we also see in pediatrics, as kids have a hard time lying on the table for 45 minutes. 

So with the introduction of AI into the echo workflow, the technicians acquire the images in 15-20 minutes, and in real time the images processed by the AI software are automatically labeled, annotated, and measured. Within two to three minutes, a full report is available for the tech to review, adjust (our measurements are fully editable), confirm, and sign off on. 

You can immediately see the benefits of reducing the time the tech has the probe in their hand and the patient spends on the table, and the tech then gets to sit at an ergonomically correct workstation (proper keyboard, mouse, large monitors, chair, etc.) and do their reporting versus on-cart, which is where the injuries occur. 

It’s a worldwide shortage, not just here in the US. In other parts of the world, waitlist times to get an echo can be eight, 10, 12, or more months, which is just not acceptable.

The OPERA study in the UK demonstrated that the introduction of AI echo can tackle this issue. In Glasgow, the wait time for an echo was reduced from 12 months to under six weeks. 

The Imaging Wire: You just received clearance for V2, but your V1 has been in the clinical field for some time already. Can you tell us more about the feedback on the use of V1 from your customers?

José Rivero: Clinically, the focus of V1 was heart failure and pulmonary hypertension. This is a critical step, because with AI, we could rapidly identify patients with heart failure or pulmonary hypertension. 

One big step that has been taken by having the AI hand-in-hand with the mobile device is that you are taking echocardiography out of the hospital. So you can just go everywhere with this technology. 

We demonstrated the feasibility of new clinical pathways using AI echo out of the hospital, in clinics or primary care settings, including novice screening1, 2 (no previous experience in echocardiography but supported by point-of-care ultrasound including AI guidance and Us2.ai analysis and reporting).

Seth Koeppel: We’re addressing the efficiency problem. Most people are pegging the time savings for the tech on the overall echo at somewhere around 15 to 20 minutes, which is significant. In a recent study from Japan published in the Journal of Echocardiography, a cardiologist using the Us2.ai software saw a 70% reduction in overall time for analysis and reporting.3 

The Imaging Wire: Let’s talk about version 2 of the software. When you started working on V2, what were some of the issues that you wanted to address with that?

Seth Koeppel: Version 1, version 2, it’s never changed for us, it’s about full automation of all echo. We aim to automate all the time-consuming and repetitive tasks the human has to do – image labeling and annotation, the clicks, measurements, and the analysis required.

Our medical affairs team works closely with the AI team and the feedback from our users to set the roadmap for the development of our software, prioritizing developments to meet clinical needs and expectations. In V2, we are now covering valve measurements and further enhancing our performance on HFpEF, as demonstrated now in comparison to the gold standard, pulmonary capillary wedge pressure (PCWP)4.

A new version is really about collaborating with leading institutions and researchers, acquiring excellent datasets for training the models until they reach a level of performance producing robust results we can all be confident in. Beyond the software development and training, we also engage in validation studies to further confirm the scientific efficiency of these models.

With V2 we’re also moving now into introducing different protocols, for example, contrast-enhanced imaging, which in the US is significant. We see in some clinics upwards of 50% to 60% use of contrast-enhanced imaging, where we don’t see that in other parts of the world. Our software is now validated for use with ultrasound-enhancing agents, and the measures correlate well.

Stress echo is another big application in echocardiography. So we’ve added that into the package now, and we’re starting to get into disease detection or disease prediction. 

V2 also aligns with guideline-based measurements for identification of cardiac amyloidosis (CA), reporting such measurements when found, along with the actual guideline recommendations, to support the identification of a condition that could otherwise be missed. 

José Rivero: We are at a point where we are now able to really go into more depth into the clinical environment, going into the echo lab itself, to where everything is done and where the higher volumes are. Before we had 23 measurements, now we are up to 45. 

And again, that can be even a screening tool. If we start thinking about even subdividing things that we do in echocardiography with AI, again, this is expanding to the mobile environment. So there’s a lot of different disease-based assessments that we do. We are now a more complete AI echocardiography assessment tool.

The Imaging Wire: Clinical guidelines are so important in cardiac imaging and in echocardiography. Us2.ai integrates and refers to guideline recommendations in its reporting. Can you talk about the importance of that, and how you incorporate this in the software?

José Rivero: Clinical guidelines play a crucial role in imaging for supporting standardized, evidence-based practice, as well as minimizing risks and improving quality for the diagnosis and treatment of patients. These are issued by experts, and adherence to guidelines is an important topic for quality of care and GDMT (guideline-directed medical therapies).

We are a scientifically driven company, so we recognize that international guidelines and recommendations are of utmost importance; hence, guideline indications are systematically visible, and discrepant measurement values are clearly highlighted.

Seth Koeppel: The beautiful thing about AI in echo is that echo is so structured that it just lends itself so perfectly to AI. If we can automate the measurements, and then we can run them through all the complicated matrices of guidelines, it’s just full automation, right? It’s the ability to produce a full echo report without any human intervention required, and to do it in a fraction of the time with zero variability and in full consideration for international recommendations.

José Rivero: This is another level of support we provide, the sonographer only has to focus on the image acquisition, the cardiologist doing the overreading and checking the data will have these references brought up to his/her attention

With echo you need to consider every point in the workflow – the sonographer really focusing on image acquisition and the cardiologist doing the overreading and checking the data. In the end, those two come together when the cardiologist and the sonographer realize that there’s efficiency on both ends. 

The Imaging Wire: V2 has only been out for a short time now but has there been research published on use of V2 in the field and what are clinicians finding?

Seth Koeppel: In V1, our software included a section labeled “investigational,” and some AI measurements were accessible for research purposes only as they had not yet received FDA clearance.

Opening access to these as investigational-research-only has enabled users to test them out and confirm the performance of the AI measurements in independently led publications and abstracts. This is why you are already seeing these studies out … and it is wonderful to see users’ interest in publishing on AI echo – a “trust and verify” approach.

With V2 and the FDA clearance, these measurements, our new features and functionalities, are available for clinical use. 

The Imaging Wire: What about the economics of echo AI?

Seth Koeppel: Reimbursement is still front and center in echo, and people don’t realize how robust it is, partially because echo is so manual and time-consuming. Hospital echo still reimburses nearly $500 under HOPPS (Hospital Outpatient Prospective Payment System). Compared to a CT, which today might reimburse $140 global, or an MRI at $300-$350, an echo still pays $500. 

When you think about the dynamic, echo still relies on an expert human who typically makes $100,000 or more a year with benefits. And it takes 45 to 60 minutes. So the economics are such that the reimbursement is held very high. 

But imagine if you can do two or three more echoes per day with the assistance of AI – you can immediately see the ROI. If you can simply do two incremental echoes a day, and there are 254 days in a working year, that’s an incremental 500 echoes. 

If there are 2,080 hours in a year and we average about an echo every hour, most places are producing about 2,000 echoes. Now you’re taking them to 2,500 or more at $500 – that’s an additional $100k per tech. Many hospitals have 8-10 techs scanning on any given day, so it’s a really compelling ROI. 

This is an AI that really has both a clinical benefit but also a huge ROI. There’s this whole debate out there about who pays for AI and how does it get paid for? This one’s a no brainer.

The Imaging Wire: If you could step back and take a holistic view of V2, what benefits do you think that your software has for patients as well as hospitals and healthcare systems?

Seth Koeppel: It goes back to just the inefficiencies of echo – you’re taking something that is highly manual, relies on expert humans that are in short supply. It’s as if you’re an expert craftsman, and you’ve been cutting by hand with a hand tool, and then somebody walks in and hands you a power tool. We still need the expert human, who knows where to cut, what to cut, how to cut. But now somebody has given him a tool that allows him to just do this job so much more efficiently, with a higher degree of accuracy. 

Let’s take another example. Strain is something that has been particularly difficult for operators because every vendor, every cart manufacturer, has their own proprietary strain. You can’t compare strain results done on a GE cart to a Philips cart to a Siemens cart. It takes time, you have to train the operators, you have human variability in there. 

In V2, strain is now included, fully automated, and vendor-neutral. You don’t have to buy expensive cart upgrades to get access to it. So many, many problems are solved in that one simple set of parameters. 

If we put it all together and look at the potential of AI echo, we can address the backlog and allow more echo to be done – not only in the echo lab but also in primary care settings and clinics, where AI echo opens new pathways for screening and detection of heart failure and heart disease at an early stage, and earlier detection means more efficient treatment.

This helps facilities facing increasing demand for echo and creates efficient longitudinal follow-up for oncology patients and at-risk populations.

In addition, we can open access to echo exams in parts of the world which do not have the expensive carts nor the expert workforce available and deliver on our mission to democratize echocardiography.

José Rivero: I would say that V2 is a very strong release, which includes contrast, stress echo, and strain. I would love to see all three, along with everything we had in V1, become mainstream, and to see the customer satisfaction with it, because I think it brings a big solution to the echo world. 

The Imaging Wire: As the year progresses, what else can we look forward to seeing from Us2.ai?

José Rivero: In the clinical area, we will continue our work to expand the range of measurements and validate our detection models, but we are also very keen to start looking into pediatric echo.

Seth Koeppel: Our user interface has been greatly improved in V2, and this is something we really want to keep focusing on. We are also working on refining our automated reporting to include customization features, perfecting the report output to further support the clinicians reviewing these reports, and integrating LLMs to make reporting accessible to non-expert HCPs and to patients themselves. 

REFERENCES

  1. Tromp, J., Sarra, C., Bouchahda, N., Ben Messaoud, M., Zouari, F., Hummel, Y., Mzoughi, K., Kraiem, S., Fehri, W., Gamra, H., Lam, C. S. P., Mebazaa, A., & Addad, F. (2023). Nurse-led home-based detection of cardiac dysfunction by ultrasound: Results of the CUMIN pilot study. European Heart Journal – Digital Health.
  2. Huang, W., Lee, A., Tromp, J., Teo, L. Y., Chandramouli, C., Ng, C. T., Huang, F., Lam, C. S. P., & Ewe, S. H. (2023). Point-of-care AI-assisted echocardiography for screening of heart failure (HANES-HF). Journal of the American College of Cardiology, 81(8), 2145.
  3. Hirata, Y., Nomura, Y., Saijo, Y., Sata, M., & Kusunose, K. (2024). Reducing echocardiographic examination time through routine use of fully automated software: a comparative study of measurement and report creation time. Journal of Echocardiography.
  4. Yaku, H., Komtebedde, J., Silvestry, F. E., & Shah, S. J. (2024). Deep learning-based automated measurements of echocardiographic estimators of invasive pulmonary capillary wedge pressure perform equally to core lab measurements: results from REDUCE LAP-HF II. Journal of the American College of Cardiology, 83(13), 316.

Accessing Quality Data for AI Training

One of the biggest roadblocks in medical AI development is the lack of high-quality, diverse data for these technologies to train on.

What Is the Issue with Data Access?

Artificial Intelligence (AI) has emerged as a game-changer in the realm of medical imaging, with immense potential to revolutionize clinical practices. AI-powered medical imaging can efficiently identify intricate patterns within data and provide quantitative assessments of disease biomarkers. This technology not only enhances the accuracy of diagnosis but can also significantly speed up the diagnostic process, ultimately improving patient outcomes.

While the landscape is promising, medical innovators grapple with challenges in accessing high-quality, diverse, and timely data, which is vital for training AI and driving progress.

A 2019 study from the Massachusetts Institute of Technology found that over half of medical AI studies relied predominantly on databases from high-income countries, particularly the United States and China. If models trained on homogeneous data are used clinically in diverse populations, they could pose a risk to patients and worsen health inequalities experienced by underrepresented groups. In the United States, if the Food and Drug Administration deems these risks too high, it could even reject a product’s application for approval. 

In trying to obtain the best training data, AI developers, particularly startups and individual researchers, face a web of legal, ethical, and technical complexities. Issues like data privacy, security, interoperability, and data quality compound these challenges, all of which are crucial to the effective and responsible use of healthcare data.

One company working to overcome these hurdles, in hopes of accelerating high-quality innovation, is Gradient Health.

Gradient Health’s Approach

Gradient Health offers AI developers instant access to one of the world’s largest libraries of anonymized medical images, sourced from hundreds of global hospitals, clinics, and research centers. This data is meticulously de-identified for compliance and can be tailored by vendors to suit their project’s needs and exported in machine learning-ready DICOM + JSON formats.

By partnering with Gradient Health, innovators can use these extensive, diverse datasets to train and validate their AI algorithms, mitigating bias in medical AI and advancing the development of precise, high-quality medical solutions.
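The "DICOM + JSON" delivery mentioned above can be pictured as an image file paired with a machine-readable metadata sidecar. The field names and filter below are illustrative assumptions for the sketch, not Gradient Health's actual export schema:

```python
import json

# Hypothetical JSON metadata sidecar accompanying a DICOM image in a
# machine-learning-ready delivery. Field names are illustrative assumptions,
# not Gradient Health's actual schema.
sidecar = json.loads("""
{
  "sop_instance_uid": "1.2.826.0.1.3680043.2.1125.1",
  "modality": "CT",
  "body_part": "CHEST",
  "source_region": "EU",
  "label": "normal"
}
""")

def keep_for_training(record, modalities=frozenset({"CT", "MR"})):
    """Example filter a pipeline might apply when assembling a training split."""
    return record["modality"] in modalities

print(keep_for_training(sidecar))  # True
```

Because the metadata travels as plain JSON rather than being buried in DICOM headers, this kind of cohort filtering (by modality, region, or label) can happen before any pixel data is read, which is what makes such exports convenient for building deliberately diverse training sets.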

Gaining access to top-tier data at the outset of the development process promises long-term benefits. Here’s how:

  • Expand Market Presence: Access the latest cross-vendor datasets to develop medical innovations, expanding your market share.
  • Global Expansion: Enter new regions swiftly with locally sourced data from your target markets, accelerating your global reach.
  • Competitive Edge: Obtain on-demand training data for imaging modalities and disease areas, facilitating product portfolio expansion.
  • Speed to Market: Quickly acquire data for product training and validation, reducing sourcing time and expediting regulatory clearances for faster patient delivery.

“After looking for a data provider for many weeks, I was not able to get even a sample delivery within one month. I was immensely glad to work with Gradient and go from first contact to final delivery within one week!” said Julien Schmidt, chief operating officer and co-founder at Mango Medical.

The Outlook

In recent years, medical AI has experienced significant growth. Innovations in medical imaging in particular have played a pivotal role in enabling healthcare professionals to identify diseases earlier and more accurately in patients with a range of conditions. 

Gradient Health offers a data-compliant, intuitive platform for AI developers, facilitating access to the essential data required to train these critical technologies. This approach holds the potential to save time, resources, and, most importantly, lives. 

More information about Gradient Health is available on the company’s website. They will also be exhibiting at RSNA 2023 in booth #5149 in the South Hall.

Autonomous AI for Medical Imaging is Here. Should We Embrace It?

What is autonomous artificial intelligence, and is radiology ready for this new technology? In this article, we explore one of the most exciting autonomous AI applications: ChestLink from Oxipit. 

What is Autonomous AI? 

Up to now, most interpretive AI solutions have focused on assisting radiologists with analyzing medical images. In this scenario, AI provides suggestions to radiologists and alerts them to suspicious areas, but the final diagnosis is the physician’s responsibility.

Autonomous AI flips the script by having AI run independently of the radiologist – for example, analyzing a large batch of chest X-ray exams for tuberculosis and screening out those certain to be normal. This can significantly reduce workload in primary care, where providers offering preventive health checkups may see up to 80% of chest X-rays with no abnormalities. 

Autonomous AI frees the radiologist to focus on cases with suspicious pathology – with the potential of delivering a more accurate diagnosis to patients in real need.

One of the first of this new breed of autonomous AI is ChestLink from Oxipit. The solution received the CE Mark in March 2022, and more than a year later it is still the only AI application capable of autonomous performance. 

How ChestLink Works

ChestLink produces final chest X-ray reports on healthy patients with no involvement from human radiologists. The application only reports autonomously on chest X-ray studies where it is highly confident that the image does not include abnormalities. These studies are automatically removed from the reporting workflow. 

ChestLink enables radiologists to report on studies most likely to have abnormalities. In current clinical deployments, ChestLink automates 10-30% of all chest X-ray workflow. The exact percentage depends on the type of medical institution, with primary care facilities having the most potential for automation.
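The triage step described above can be pictured as a simple confidence gate. This sketch is a generic illustration of autonomous triage; the threshold, data structure, and function are assumptions for the example, not Oxipit's actual implementation:

```python
# Generic sketch of autonomous triage: studies the model is highly confident
# are normal receive an automatic "no abnormality" report; everything else is
# routed to the radiologist worklist. Threshold and structure are assumptions
# for illustration, not Oxipit's implementation.

AUTONOMY_THRESHOLD = 0.99  # assumed confidence cutoff for auto-reporting

def triage(studies):
    """Split (study_id, p_normal) pairs into auto-reported and human-read queues."""
    auto_reported, radiologist_queue = [], []
    for study_id, p_normal in studies:
        if p_normal >= AUTONOMY_THRESHOLD:
            auto_reported.append(study_id)      # final report, no human read
        else:
            radiologist_queue.append(study_id)  # routed to a radiologist
    return auto_reported, radiologist_queue

auto, queue = triage([("cxr-001", 0.999), ("cxr-002", 0.42), ("cxr-003", 0.97)])
print(auto)   # ['cxr-001']
print(queue)  # ['cxr-002', 'cxr-003']
```

Note the asymmetry of the gate: a borderline-normal study like `cxr-003` still goes to a human, which is why the share of automated studies (10-30% in deployments) is well below the share of truly normal exams.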

ChestLink Clinical Validation

ChestLink was trained on a dataset with over 500k images. In clinical validation studies, ChestLink consistently performed at 99%+ sensitivity.

A recent study published in Radiology highlighted the sensitivity of the application.

“The most surprising finding was just how sensitive this AI tool was for all kinds of chest disease. In fact, we could not find a single chest X-ray in our database where the algorithm made a major mistake. Furthermore, the AI tool had a sensitivity overall better than the clinical board-certified radiologists,” said study co-author Louis Lind Plesner, MD, from the Department of Radiology at the Herlev and Gentofte Hospital in Copenhagen, Denmark.

In this study, ChestLink autonomously reported on 28% of all normal studies.

In another study at the Oulu University Hospital in Finland, researchers concluded that AI could reliably remove 36.4% of normal chest X-rays from the reporting workflow with a minimal number of false negatives, leading to effectively no compromise on patient safety. 
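To see why sensitivity is the critical number for this workflow, it helps to work through an illustrative cohort. Only the 99%+ sensitivity figure comes from the validation studies above; the cohort size and abnormality rate below are assumptions chosen for round numbers:

```python
# Illustrative arithmetic: how many abnormal studies would slip through an
# autonomous normal-reporting workflow. Only the sensitivity figure reflects
# the validation studies; cohort size and prevalence are assumptions.

total_studies = 10_000
abnormal_rate = 0.20     # assume 20% of studies show pathology
sensitivity = 0.999      # "99%+ sensitivity" reported in clinical validation

abnormal = int(total_studies * abnormal_rate)  # abnormal studies in the cohort
missed = round(abnormal * (1 - sensitivity))   # abnormals auto-reported as normal

print(abnormal, missed)  # 2000 2
```

In other words, at this sensitivity the workflow misreports on the order of a couple of abnormal studies per ten thousand exams, which is the kind of arithmetic behind the Oulu study's conclusion of "effectively no compromise on patient safety."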

Safe Path to AI Autonomy

Oxipit ChestLink is currently used in healthcare facilities in the Netherlands, Finland, Lithuania, and other European countries, and is in the trial phase for deployment in one of the leading hospitals in England.

ChestLink follows a three-stage framework for clinical deployment.

  • Retrospective analysis. ChestLink analyzes a couple of years’ worth (100k+) of historical chest X-ray studies at the medical institution. In this analysis, the product is validated on real-world data, and the fraction of the reporting scope that can be automated is realistically estimated.
  • Semi-autonomous operations. The application moves into prospective settings, analyzing images in near-real time. ChestLink produces preliminary reports for healthy patients, which are then reviewed and approved by a certified clinician.
  • Autonomous operations. The application autonomously reports on high-confidence healthy patient studies. The application’s performance is monitored in real time with analytical tools.

Are We There Yet?

ChestLink aims to address the shortage of clinical radiologists worldwide, which has led to a substantial decline in care quality.

In the UK, the NHS currently faces a massive 33% shortfall in its radiology workforce. Nearly 71% of clinical directors of UK radiology departments feel that they do not have a sufficient number of radiologists to deliver safe and effective patient care.

ChestLink offers a safe pathway into autonomous operations by automating a significant and somewhat mundane portion of the radiologist workflow without any negative effects on patient care. 

So should we embrace autonomous AI? The real question should be, can we afford not to? 
