iSono Health’s Wearable Breast Ultrasound

iSono Health announced the FDA clearance of its ATUSA automated wearable 3D breast ultrasound system, a first-of-its-kind device that taps into some of the biggest trends in imaging.

The wearable ATUSA system automatically captures the entire breast volume, producing standardized/repeatable breast ultrasound exams in two minutes without requiring a trained operator. The scanner combines with iSono’s ATUSA Software Suite to support real-time 2D visualization, advanced 3D visualization and localization, and AI integration (including iSono’s forthcoming AI tools). That positions the ATUSA for a range of interesting use cases:

  • Enhancing routine exams in primary care and women’s health clinics
  • Expanding breast imaging access in developing countries
  • Supporting longitudinal monitoring for higher-risk women
  • Allowing remote breast cancer monitoring

iSono might have to overcome some pretty big biases regarding how and where providers believe breast exams are supposed to take place. However, the ATUSA’s intended use cases and value propositions have already been gaining momentum across imaging.

  • The rapid expansion of handheld POCUS systems and AI guidance solutions has made ultrasound an everyday tool for far more clinicians than just a few years ago.
  • Wearable imaging continues to be an innovation hotspot, including a range of interesting projects that are developing imaging helmets, patches, and even a few other wearable breast ultrasound systems.
  • There’s a growing focus on addressing the developing world’s imaging gap with portable imaging systems.
  • We’re seeing greater momentum towards technology-enabled enhancements to routine breast exams, including Siemens Healthineers’ recent move to distribute UE LifeSciences’ iBreastExam device (uses vibrations, not imaging).
  • At-home imaging is becoming a far more realistic idea, with commercial initiatives from companies like Butterfly and Pulsenmore in place, and earlier-stage efforts from other breast ultrasound startups. 

The Takeaway

iSono Health has a long way to go before it earns an established role in breast cancer pathways. However, the ATUSA’s use cases and value proposition are well aligned with some of imaging’s biggest trends, and there’s still plenty of demand to improve breast imaging access and efficiency across the world.

Chest Pain Implications

The major cardiac imaging societies weighed in on the AHA/ACC’s new Chest Pain Guidelines, highlighting the notable shifts coming to cardiac imaging and the adjustments they could require.

The cardiac CT and MRI societies took a victory lap, highlighting CCTA and CMR’s now-greater role in chest pain diagnosis, while forecasting that the new guidelines will bring:

  • Increased demand for cardiac CT & MR exams and scanners
  • A need for more cardiac CT & MR staff, training, and infrastructure
  • Requests for more cardiac CT & MR funding and reimbursements
  • More collaborations across radiology, cardiology, and emergency medicine

The angiography and nuclear cardiology societies were less celebratory. Rather than urging providers to start buying more scanners and training more techs (as the CT & MR societies did), they focused on defending their roles in chest pain diagnosis, reiterating their advantages, and pointing out how the new guidelines might incorrectly steer patients toward unnecessary or insufficient tests.

FFR-CT’s new role as a key post-CT diagnostic step made headlines when the guidelines came out, but the cardiac imaging societies don’t seem to be ready to welcome the AI approach. The nuclear cardiology and radiology societies called out FFR-CT’s low adoption and limited supporting evidence, while the SCCT didn’t even mention FFR-CT in its statement (and they’re the cardiac CT society!).

Echocardiography maintained its core role in chest pain diagnosis, but the echo society clearly wanted more specific guidelines around who can perform echo and how well they’re trained to perform those exams. That reaction is understandable given the sonographer workforce challenges and the expansion of cardiac POCUS to new clinical roles (w/ less echo training), although some might argue that echo AI tools could help address these problems.

The Takeaway

Imaging and shared decision-making play a prominent role in the new chest pain guidelines, which seems like good news for patient-specific care (and imaging department/vendor revenues), but it also leaves room for debate within the clinic and across clinical societies. 

The JACC seems to understand that it needs to clear up many of these gray areas in future versions of the chest pain guidelines. Until then, it will be up to providers to create decision-making and care pathways that work best for them, and evolve their teams and technologies accordingly.

Chest CT’s Untapped Potential

A new AJR study out of Toronto General Hospital highlighted the largely untapped potential of non-gated chest CT CAC scoring, and the significant impact it could have with widespread adoption.

Current guidelines recommend visual CAC evaluations with all non-gated non-contrast chest CTs. However, these guidelines aren’t consistently followed and they exclude contrast-enhanced chest CTs.

The researchers challenged these practices, performing visual CAC assessments on 260 patients’ non-gated chest CT exams (116 contrast-enhanced, 144 non-contrast) and comparing them to the same patients’ cardiac CT CAC scores (performed within 12 months) and ~6-year cardiac event outcomes.

As you might expect, visual contrast-enhanced and non-contrast chest CT CAC scoring (a quick sketch of how these metrics are derived follows the list):

  • Detected CAC with high sensitivity (83% & 90%) and specificity (both 100%)
  • Accurately predicted major cardiac events (Hazard ratios: 4.5 & 3.4)
  • Had relatively benign false negatives (0 of 26 had cardiac events)
  • Achieved high inter-observer agreement (κ=0.89 & 0.95)
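For readers who want to unpack those numbers, here’s a minimal sketch (hypothetical counts, not the study’s data) of how the sensitivity, specificity, and inter-observer kappa figures above are derived:

```python
# All counts below are hypothetical, chosen only to illustrate the math
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)  # share of true CAC cases the reader caught

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)  # share of CAC-free cases correctly called negative

def cohens_kappa(both_pos: int, a_only: int, b_only: int, both_neg: int) -> float:
    """Agreement between two readers on a binary CAC call, beyond chance."""
    n = both_pos + a_only + b_only + both_neg
    observed = (both_pos + both_neg) / n
    expected = (((both_pos + a_only) / n) * ((both_pos + b_only) / n)
                + ((b_only + both_neg) / n) * ((a_only + both_neg) / n))
    return (observed - expected) / (1 - expected)

print(f"sensitivity: {sensitivity(tp=90, fn=10):.0%}")  # -> 90%
print(f"kappa: {cohens_kappa(80, 5, 5, 110):.2f}")      # -> ~0.90
```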

The Takeaway

Considering that CAC scores were only noted in 37% of the patients’ original non-contrast chest CT reports and 23% of their contrast-enhanced chest CT reports, this study adds solid evidence in favor of more widespread CAC score reporting in non-gated CT exams.

That might also prove to be good news for the folks working on opportunistic CAC AI solutions, given that AI has (so far) seen the greatest adoption when it supports tasks that most radiologists are already performing.

Radiology’s Smart New Deal

A new Journal of Digital Imaging editorial from UCLA radiology chair Dieter R. Enzmann, MD proposed a complete overhaul of how radiology reports are designed and distributed, in a way that should make sense to radiology outsiders but might make some folks within radiology uncomfortable.

Dr. Enzmann’s “Smart New Deal” proposes that radiology reports and reporting workflows should evolve to primarily support smartphone-based usage for both patients and physicians, ensuring that reports are:

  • Widely accessible 
  • Easily navigated and understood 
  • Built with empathy for current realities (info overload, time scarcity, mobility)
  • And widely utilized… because they are accessible, simple, and understandable

To achieve those goals, Dr. Enzmann proposes a “creative destruction” of our current reporting infrastructure, helped by ongoing improvements in foundational technologies (e.g. cloud, interoperability) and investments from radiology’s tech leaders (or from their future disruptors).

Despite Dr. Enzmann’s impressive credentials, the people of radiology might have a hard time coming to terms with this vision, given that:

  • Radiology reports are mainly intended for referring physicians, and referrers don’t seem to be demanding simplified phone-native reports (yet)
  • This is a big change given how reports are currently formatted and accessed
  • Patient-friendly features that require new labor often face resistance
  • It might make more sense for this smartphone-centric approach to cover patients’ entire healthcare journeys (not just radiology reports)

The Takeaway

It can be hard to envision a future when radiology reports are primarily built for smartphone consumption.

That said, few radiologists or rad vendors would argue against other data-based industries making sure their products (including their newsletters) are accessible, understandable, and actionable. Many might also recognize that some of the hottest imaging segments are already smartphone-native (e.g. AI care coordination solutions, PocketHealth’s image sharing, handheld POCUS), while some of the biggest trends in radiology focus on making reports easier for patients and referrers to consume.

Smartphone-first reporting might not be a sure thing, but the trends we’re seeing do suggest that efforts to achieve Dr. Enzmann’s core reporting goals will be rewarded no matter where technology takes us.

Cleerly’s Downstream Effect

A new AJR study showed that Cleerly’s coronary CTA AI solution detects obstructive coronary artery disease (CAD) more accurately than myocardial perfusion imaging (MPI), and could substantially reduce unnecessary invasive angiographies. 

The researchers used Cleerly to analyze coronary CTAs from 301 patients with stable myocardial ischemia symptoms who also received stress MPI exams. They then compared the Cleerly CCTA and MPI results against the patients’ invasive angiography exams, including quantitative coronary angiography (QCA) and fractional flow reserve (FFR) measurements.

The Cleerly-based coronary CTA results significantly outperformed MPI for predicting stenosis and caught cases that MPI-based ischemia results didn’t flag:

  • Cleerly AI detected more patients with obstructive stenosis (≥50%; 0.88 vs. 0.66 AUCs)
  • Cleerly AI identified more patients with severe stenosis (≥70%; 0.92 vs. 0.81 AUCs)
  • Cleerly AI detected far more patients with signs of ischemia in FFR (<0.80; 0.90 vs. 0.71 AUCs) 
  • Out of 102 patients with negative MPI ischemia results, Cleerly identified 55 patients with obstructive stenosis and 20 with severe stenosis (54% & 20%)
  • Out of 199 patients with positive MPI ischemia results, Cleerly identified 46 patients with non-obstructive stenosis (23%)

MPI and Cleerly-based CCTA analysis also worked well together. The combination of ≥50% stenosis via Cleerly and ischemia in MPI achieved 95% sensitivity and 63% specificity for detecting severe stenosis (vs. 74% & 43% using QCA measurements).

Based on those results, pathways that use a Cleerly AI-based CCTA benchmark of ≥70% stenosis to approve patients for invasive angiography would reduce invasive angiography utilization by 39%, while workflows requiring both a positive MPI ischemia result and a ≥70% Cleerly benchmark would reduce it by 49%.
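Here’s a minimal sketch of those two gating pathways in code. Only the thresholds (≥70% Cleerly stenosis, positive MPI ischemia) come from the study; the Patient fields and the four-patient cohort are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    cleerly_max_stenosis: float  # % stenosis from Cleerly's CCTA analysis (hypothetical field)
    mpi_ischemia_positive: bool  # stress MPI ischemia result (hypothetical field)

def refer_ccta_only(p: Patient) -> bool:
    """Pathway 1: invasive angiography only if Cleerly flags >=70% stenosis."""
    return p.cleerly_max_stenosis >= 70

def refer_ccta_plus_mpi(p: Patient) -> bool:
    """Pathway 2: require a positive MPI result AND >=70% stenosis on Cleerly."""
    return p.mpi_ischemia_positive and p.cleerly_max_stenosis >= 70

# Hypothetical cohort; the status quo sends every symptomatic patient to angiography
cohort = [Patient(82, True), Patient(45, True), Patient(74, False), Patient(30, False)]
for rule in (refer_ccta_only, refer_ccta_plus_mpi):
    referred = sum(rule(p) for p in cohort)
    print(f"{rule.__name__}: {1 - referred / len(cohort):.0%} fewer angiographies")
```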

The Takeaway

We’re seeing strong research and policy momentum towards using coronary CTA as the primary CAD diagnosis method and reducing reliance on invasive angiography. This and other recent studies suggest that CCTA AI solutions like Cleerly could play a major role in that CCTA-first shift.

NYU’s Video Reporting Experiment

A new AJR study out of NYU just provided what might be the first significant insights into how patient-friendly video reports might impact radiologists and patients.

Leveraging a new Visage 7 video feature and 3D rendering from Siemens Healthineers, NYU organized a four-month study that encouraged and evaluated patient-centered video reports (w/ simple video + audio explanations). 

During the study period, just 105 out of 227 NYU radiologists created videos, resulting in 3,763 total video reports. The videos were included within NYU’s standard radiology reports and made available via its patient portal.

The video reports added an average of 4 minutes of recording time to radiologists’ workflows (± 2:21), with abnormal reports understandably taking longer than normal reports (5:30 vs. 4:15, although the difference wasn’t statistically significant). The authors admitted that video creation has to get faster in order to achieve clinical adoption, revealing plans to use standardized voice macros to streamline the process.

Patients viewed just 864 unique video reports, leaving 2,899 videos unviewed. However, when NYU moved the video links above the written section late in the study period, the share of patients who watched their videos jumped from 20% to 40%. Patients who watched the videos also really liked them:

  • Patients scored their overall video report experiences a 4.7 out of 5
  • The videos’ contribution to patients’ diagnostic understanding also scored 4.7 of 5
  • 56% of patients reported reduced anxiety due to the videos (vs. 1% who reported increased anxiety)
  • 91% of patients preferred video + written reports (vs. 2% w/ written-only)

Although not the videos’ intended audience, referring physicians viewed 214 unique video reports, and anecdotes suggested that the videos helped referrers explain findings to their patients.

The Takeaway

We’ve covered plenty of studies showing that patients want to review their radiology reports, but struggle to understand them. We’ve also seen plenty of suggestions that radiologists want to improve their visibility to patients and highlight their role in patient care.

This study shows that video reports could satisfy both of those needs, while confirming that adopting video reporting wouldn’t require significant infrastructure changes (if your PACS supports video), although it would add roughly four minutes to radiologists’ reporting workflows.

That doesn’t suggest a major increase in video reporting will come any time soon, especially considering most practices/departments’ focus on efficiency, but it does make future video reporting adoption seem a lot more realistic (or at least possible).

Who Owns LVO AI?

The FDA’s public “reminder” that studies analyzed by AI-based LVO detection tools (CADt) still require radiologist interpretation became one of the hottest stories in radiology last week, and although many of us didn’t realize it at first, it made a big statement about how AI-based care coordination is changing the way care teams and radiologists work together.

The FDA decided to issue this clarification after finding that some providers were using LVO AI tools to guide their stroke treatment decisions and “might not be aware” that they need to base those decisions on radiologist interpretations. The agency reiterated that these tools are only intended to flag suspicious exams and support diagnostic prioritization, revealing plans to work with LVO AI vendors to make sure everyone understands radiologists’ role in these workflows. 

This story was covered in all the major radiology publications and sparked a number of social media discussions with some interesting theories:

  • One social post suggested that the FDA is preemptively taking a stand against autonomous AI
  • Some posts and articles wondered if AI might be overly influencing radiologists’ diagnoses
  • The Imaging Wire didn’t even mention care coordination until a reader emailed with a clarification and we went back and edited our initial story

That reader had a point. It does seem like this is more of a care coordination issue than an AI diagnostics issue, considering that:

  • These tools send notifications and images to interventionalists/surgeons before radiologists are able to read the same cases
  • Two of the three leading LVO AI care coordination tools are marketed to everyone on the stroke team except radiologists (go check their sites)
  • Before AI care coordination came along, it would have been hard to believe that stroke team members “might not be aware” that they needed to check radiologist interpretations before making care decisions

The Takeaway

LVO AI care coordination tools have been a huge commercial and clinical success, and care coordination platforms are quickly expanding to new use cases.

That seems like good news for emergency patients and care teams, but as the FDA reminded us last week, it also means that we’re going to need more safeguards to ensure that care decisions are based on radiologists’ diagnoses — even if the AI tool already informed care teams what the diagnosis might be.

Us2.ai Automates Globally

One of imaging AI’s hottest segments just got even hotter with the completion of Us2.ai’s $15M Series A round and the global launch of its flagship echocardiography AI solution. It’s been at least a year since we led off a newsletter with a funding announcement, but Us2.ai’s unique foundation and the echo AI segment’s rapid evolution make this a story worth telling…

Us2.ai has already achieved FDA clearance, a growing list of clinical evidence, and key hardware and pharma alliances (EchoNous & AstraZeneca). 

  • The Singapore-based startup also has a unique level of credibility, including co-founders with a history of clinical and business success, and VC support from IHH Healthcare (the world’s second largest health system).
  • With its funding secured, Us2.ai will accelerate its commercial and regulatory expansion, with a focus on driving global clinical adoption (US, Europe, and Asia) and developing new alliances (hardware vendors, healthcare providers, researchers, pharma).

Us2.ai joins a crowded echo AI arena, which might have more commercial-stage vendors than all other ultrasound AI segments combined. In fact, the major echo guidance (Caption Health, UltraSight) and echo reporting (DiA Imaging, Ultromics, Us2.ai) AI startups have already generated more than $180M in combined VC funding and forged a number of major hardware and PACS partnerships.

  • This influx of echo AI startups might be warranted, given echocardiography’s workforce, efficiency, and variability challenges. It might even prove to be visionary if handheld ultrasounds continue their rapid expansion to new users and settings (including primary and at-home care).
  • Us2.ai will have to rely on its reporting advantages to stand out against its well-established competitors, as it is the only vendor to fully automate echo reporting (editable/explainable reports in two minutes) and analyze every chamber of the heart (vs. just the left ventricle with some vendors).
  • That said, the incumbent echo AI players have big head starts and the impact of Us2.ai’s automation advantage will rely on ultrasound OEMs’ openness to new alliances and (of course) the rate that providers embrace AI for echo reporting.

The Takeaway

Even if many cardiologists and sonographers would have a hard time differentiating the various echo AI solutions, this is a segment that’s showing a high level of product-market fit. That’s more than you can say for most imaging AI segments, and product advancements like Us2.ai’s should improve this alignment. It might even help drive widespread adoption.

The Case for Algorithmic Audits

A new Lancet Digital Health study could have become one of the many “AI rivals radiologists” papers that we see each week, but it instead served as an important lesson that traditional performance tests might not prove that AI models are actually safe for clinical use.

The Model – The team developed their proximal femoral fracture detection DL model using 45.7k frontal X-rays performed at Australia’s Royal Adelaide Hospital (w/ 4,861 fractures).

The Validation – They then tested it against a 4,577-exam internal set (w/ 640 fractures), 400 of which were also interpreted by five radiologists (w/ 200 fractures), and against an 81-image external validation set from Stanford.

The Results – All three tests produced results that a typical study might have viewed as evidence of high-performance: 

  • The model outperformed the five radiologists (0.994 vs. 0.969 AUCs)
  • It beat the best-performing radiologist’s sensitivity (95.5% vs. 94.5%) and specificity (99.5% vs. 97.5%)
  • It generalized well with the external Stanford data (0.980 AUC)

The Audit – Despite the strong results, a follow-up audit revealed that the model might make some predictions for the wrong reasons, suggesting that it is unsafe for clinical deployment:

  • One false negative X-ray included an extremely displaced fracture that human radiologists would catch
  • X-rays featuring abnormal bones or joints had a 50% false negative rate, far higher than the reader set’s overall false negative rate (2.5%)
  • Saliency maps showed that AI decisions were almost never based on the outer region of the femoral neck, even with images where that region was clinically relevant (but it still often made the right diagnosis)
  • The model scored a high AUC with the Stanford data, but showed a substantial model operating point shift (see the sketch after this list)
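To make that last finding concrete, here’s a minimal sketch of an operating-point-shift check, using synthetic labels and scores rather than the study’s model or data: fix the threshold that hits ~95% sensitivity on internal validation, then measure how sensitivity and specificity drift on an external set that AUC alone would still rate as high-performing.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic stand-ins for real labels and model outputs (hypothetical data)
rng = np.random.default_rng(0)
y_int, y_ext = rng.integers(0, 2, 500), rng.integers(0, 2, 81)
scores_int = y_int * 0.5 + rng.random(500) * 0.6  # strong internal separation
scores_ext = y_ext * 0.3 + rng.random(81) * 0.6   # weaker external separation

# Fix the operating point: the threshold achieving ~95% sensitivity internally
fpr, tpr, thresholds = roc_curve(y_int, scores_int)
threshold = thresholds[np.argmax(tpr >= 0.95)]

# The same threshold can behave very differently on external data,
# even when the external AUC still looks impressive
for name, y, s in [("internal", y_int, scores_int), ("external", y_ext, scores_ext)]:
    pred = s >= threshold
    sens = (pred & (y == 1)).sum() / (y == 1).sum()
    spec = (~pred & (y == 0)).sum() / (y == 0).sum()
    print(f"{name}: sensitivity={sens:.0%}, specificity={spec:.0%}")
```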

The Case for Auditing – Although the study might have not started with this goal, it ended up becoming an argument for more sophisticated preclinical auditing. It even led to a separate paper outlining their algorithmic auditing process, which among other things suggested that AI users and developers should co-own audits.

The Takeaway

Auditing generally isn’t the most exciting topic in any field, but this study shows that it’s exceptionally important for imaging AI. It also suggests that audits might be necessary for achieving the most exciting parts of AI, like improving outcomes and efficiency, earning clinician trust, and increasing adoption.

Envisioning A Difficult Future

S&P Global Ratings’ decision to downgrade Envision Healthcare might have been largely overlooked during another busy healthcare news week, but it could prove to be part of one of the biggest stories in healthcare economics.

About Envision – The private equity-backed mega practice employs more than 25k clinicians across hundreds of US hospitals, including roughly 800 radiologists who perform over 10 million reads per year. 

The Downgrade – S&P downgraded Envision Healthcare to ‘CCC’ (from ‘CCC+’) and placed it on CreditWatch with negative implications, citing the company’s “inadequate” liquidity, a missed financial filing deadline, and a challenging path forward. Envision owes $700M by October 2023 (and more after that), but S&P expects the company to end 2022 with less than $100M in cash, risking more short-term downgrades and bigger long-term disruptions.

The Background – If you’re wondering how Envision found itself in this situation, a recent Prospect.org exposé has some answers (or at least its version of the answers):

  • When private equity giant KKR acquired Envision in 2018, it burdened the company with billions in debt, including a $5.3B first-lien term loan due in 2025
  • KKR’s initial strategy involved keeping most of Envision’s clinicians out-of-network (and earning higher surprise billing rates), but Envision moved many of its physicians in-network amid public backlash and looming legislation 
  • Ongoing surprise billing legislation spooked investors, causing Envision’s first-lien term loan to trade for 50 cents on the dollar in early 2020, before bouncing back to a somewhat-less-distressed 70-80 cent range later that year
  • The COVID pandemic further strained Envision’s finances, as many of its core specialties saw major volume declines (emergency, anesthesiology, radiology, GI, etc.)
  • Envision avoided bankruptcy thanks to an estimated $100M CARES Act bailout and help from its creditors
  • The final surprise billing legislation turned out to be pretty favorable for Envision, but not as favorable as back in the pre-legislation days
  • As of March 2022, Envision’s $5.3B first-lien term loan was still trading in distressed territory (73 cents), and it has other loans to pay off too

The Path Forward – It’s hard to predict how this will work out for Envision, although Prospect.org suggests that it might involve KKR splitting Envision into two companies. One could be saddled with all the debt and destined for bankruptcy, while the other entity (and KKR) could emerge “unscathed.”

The Takeaway

For many in healthcare this is a cautionary tale about what can go wrong when private equity influences are combined with an over-reliance on a disputed business model (in this case surprise billing) and a global pandemic. It also makes you wonder if other mega practices are in similar situations.
