University of Chicago researchers provided solid evidence that hybrid multidimensional MRI (HM-MRI) might be superior to multiparametric MRI (mpMRI) for diagnosing clinically significant prostate cancer.
That’s a big statement after nearly two decades of prostate MRI exams, but mpMRI’s persistent variability challenges leave room for improvement, and some believe HM-MRI’s quantitative approach could add objectivity.
To test that theory, the researchers had four radiologists with varying experience levels (1 to 20 years) interpret HM-MRI and mpMRI exams from 61 men with biopsy-confirmed prostate cancer, finding that the HM-MRI exams produced:
- Higher AUCs among three of the four readers (HM-MRI vs. mpMRI: 0.61 vs. 0.66; 0.71 vs. 0.60; 0.59 vs. 0.50; 0.64 vs. 0.46), with the least experienced rad achieving the greatest AUC improvement
- Higher specificity among all four readers (HM-MRI vs. mpMRI: 48% vs. 37%; 78% vs. 26%; 48% vs. 0%; 46% vs. 7%)
- Significantly greater interobserver agreement (Cronbach’s alpha: 0.88 vs. 0.26; >0.60 indicates reliability)
- Far shorter average interpretation times (73 vs. 254 seconds)
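For readers curious how that agreement figure is derived, Cronbach’s alpha can be computed directly from a cases-by-readers score matrix. A minimal sketch (the reader scores below are a hypothetical sanity check, not the study’s data):

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (n_cases, n_readers) score matrix.

    alpha = k/(k-1) * (1 - sum(per-reader variances) / variance(summed scores)),
    where k is the number of readers; values above ~0.60 are commonly
    taken to indicate acceptable inter-reader reliability.
    """
    k = ratings.shape[1]
    reader_vars = ratings.var(axis=0, ddof=1)    # each reader's score variance
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of per-case score totals
    return k / (k - 1) * (1 - reader_vars.sum() / total_var)

# Sanity check: four readers in perfect agreement yield alpha = 1.0
scores = np.array([1.0, 2, 3, 4, 5, 2, 3, 4]).reshape(-1, 1)
print(round(cronbach_alpha(np.tile(scores, (1, 4))), 2))  # → 1.0
```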
As the study’s editorial put it, HM-MRI appears to be a “quantitative step in the right direction” for prostate MRI, and has the potential to address mpMRI’s variability, accuracy, and efficiency challenges.
Exo took a big step towards making its handheld ultrasounds easier to use and adopt, acquiring AI startup Medo AI. Although unexpected, this is a logical and potentially significant acquisition that deserves a deeper look…
Exo plans to integrate Medo’s Sweep AI technology into its ultrasound platform, forecasting that this hardware-software combination will streamline Exo POCUS adoption among clinicians who lack ultrasound training/experience.
- Medo’s automated image acquisition and interpretation software has clearance for two exams (thyroid nodule assessments, developmental hip dysplasia screening), and it has more AI modules in development.
Exo didn’t disclose acquisition costs, but Medo AI is relatively modest in size (23 employees on LinkedIn, no public info on VC rounds) and it’s unclear if it had any other bidders.
- Either way, Exo can probably afford it following its $220M Series C in July 2021 (total funding now >$320M), especially considering that Medo’s use case directly supports Exo’s core strategy of expanding POCUS to more clinicians.
Some might point out that this acquisition continues 2022’s AI shakeup, which has already brought three other AI acquisitions (Aidence & Quantib by RadNet; Nines by Sirona) and at least two strategic pivots (MaxQ AI & Kheiron).
- That said, this is the first AI acquisition by a hardware vendor and it doesn’t represent the type of segment consolidation that everyone keeps forecasting.
Exo’s Medo acquisition does introduce a potential shift in how handheld ultrasound vendors expand their AI software stacks, which have historically relied on a mix of partnerships and in-house development.
Handheld ultrasound is perhaps the only medical imaging product segment that includes an even mix of the industry’s largest OEMs and extremely well-funded startups, setting the stage for fierce competition.
That competition is even stronger when you consider that the handheld ultrasound segment’s primary market (point-of-care clinicians) is still early in its adoption curve, which places a big target on any products that could make handheld ultrasounds easier to use and adopt (like Medo AI).
A new JASE study showed that AI-based echocardiography measurements can be used to predict COVID patient mortality, but manual measurements performed by echo experts can’t. This could be seen as yet another “AI beats humans” study (or yet another COVID AI study), but it also provides important evidence of AI’s potential to reduce echo measurement variability.
Starting with transthoracic echocardiograms from 870 hospitalized COVID patients (13 hospitals, 9 countries, 27.4% who later died), the researchers utilized Ultromics’ EchoGo Core AI solution and a team of expert readers to measure left ventricular ejection fraction (LVEF) and LV longitudinal strain (LVLS). They then analyzed the measurements and applied them to mortality prediction models, finding that the AI-based measurements:
- Were “significant predictors” of patient mortality (LVEF: OR=0.974, p=0.003; LVLS: OR=1.060, p=0.004), while the manual measurements couldn’t be used to predict mortality
- Had significantly less variability than the experts’ manual measurements
- Were similarly “feasible” as manual measurements when applied to the various echo exams
- Showed stronger correlations with other COVID biomarkers (e.g. diastolic blood pressure)
- Combined with other biomarkers to produce even more accurate mortality predictions
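As a side note on interpreting those odds ratios: they are per-unit multipliers, so larger differences in the predictor compound multiplicatively. A quick sketch using the reported per-unit values (the 10-point scenario is illustrative, not from the study):

```python
def compound_odds_ratio(per_unit_or: float, units: float) -> float:
    """Odds ratio implied by a `units`-point difference in the predictor."""
    return per_unit_or ** units

# Reported per-unit ORs: LVEF 0.974 (each extra EF point lowers the odds
# of death), LVLS 1.060 (each extra strain point raises them).
# A hypothetical 10-point-higher LVEF implies roughly 23% lower odds:
print(round(compound_odds_ratio(0.974, 10), 2))  # → 0.77
```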
The authors didn’t seem too surprised by the AI measurements’ lower variability, or by their conclusion that reducing measurement variability “consequently increased the statistical power to predict mortality.”
They also found that sonographers’ original scanning inconsistency was responsible for nearly half of the experts’ measurement variability, suggesting that a combination of echo guidance AI software (e.g. Caption or UltraSight) with echo reporting AI tools (e.g. Us2.ai or Ultromics) could “further reduce variability.”
Echo AI measurements aren’t about to become a go-to COVID mortality biomarker (clinical factors and comorbidities are much stronger predictors), but this study makes a strong case for echo AI’s measurement consistency advantage. It’s also a reminder that reducing variability improves overall accuracy, which would be valuable for sophisticated prediction models or everyday echocardiography operations.
A new Mass General Brigham study highlighted Annalise.ai’s pneumothorax detection solution’s strong diagnostic performance, including across different pneumothorax types and clinical scenarios.
The researchers used Annalise Enterprise CXR Triage Pneumothorax to analyze 985 CXRs (435 positive), detecting simple and tension pneumothorax cases with high accuracy:
- Simple pneumothorax – 0.979 AUC (94.3% sensitivity, 92.0% specificity)
- Tension pneumothorax – 0.987 AUC (94.5% sensitivity, 95.3% specificity)
The study also suggests that Annalise Enterprise CXR should maintain this strong performance when used outside of Mass General, as it surpassed standard accuracy benchmarks (>0.95 AUC, >80% sensitivity & specificity) across nearly all of the study’s clinical scenarios (CXR manufacturer, CXR projection type, patient sex/age/positioning).
The clinical benefits of early pneumothorax detection are clear, so studies like this are good news for the growing number of FDA-cleared pneumothorax AI vendors who are working on clinical adoption.
However, this study feels like even better news for Annalise.ai, noting that it is one of the few pneumothorax AI vendors that detects both simple and tension pneumothorax, and considering that Annalise Enterprise CXR is capable of detecting 122 other CXR indications (even if it’s currently only FDA-cleared for pneumothorax).
Mayo Clinic researchers added to the growing field of evidence suggesting that CT radiomics can be used to detect signs of pancreatic ductal adenocarcinoma (PDAC) well before they are visible to radiologists, potentially allowing much earlier and more effective surgical interventions.
The researchers first extracted pancreatic cancer’s radiomics features using pre-diagnostic CTs from 155 patients who were later diagnosed with PDAC and 265 CTs from healthy patients. The pre-diagnostic CTs were performed for unrelated reasons a median of 398 days before cancer diagnosis.
They then trained and tested four different radiomics-based machine learning models using the same internal dataset (training: 292 CTs; testing: 128 CTs), with the top model identifying future pancreatic cancer patients with promising results:
- AUC – 0.98
- Accuracy – 92.2%
- Sensitivity – 95.5%
- Specificity – 90.3%
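For context on how those figures relate, accuracy, sensitivity, and specificity all fall out of a single confusion matrix. A minimal sketch with hypothetical labels (not the study’s data):

```python
import numpy as np

def binary_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Accuracy, sensitivity, and specificity from binary labels/predictions."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
    }

# Hypothetical toy labels: 4 true positives, 6 true negatives
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 1])
# sensitivity 3/4 = 0.75, specificity 5/6 ≈ 0.83, accuracy 8/10 = 0.8
print(binary_metrics(y_true, y_pred))
```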
Interestingly, the same ML model had even better specificity in follow-up tests using an independent internal dataset (n=176; 92.6%) and an external NIH dataset (n=80; 96.2%).
Mayo Clinic’s ML radiomics approach also significantly outperformed two radiologists, who achieved “only fair” inter-reader agreement (Cohen’s kappa 0.3) and produced far lower AUCs (rads’ 0.66 vs. ML’s 0.95 – 0.98). That’s understandable, given that these early pancreatic cancer “imaging signatures” aren’t visible to humans.
Although radiomics-based pancreatic cancer detection is still immature, this and other recent studies certainly support its potential to detect early-stage pancreatic cancer while it’s treatable.
That evidence should grow even more conclusive in the future, noting that members of this same Mayo Clinic team are running a 12,500-patient prospective, randomized trial exploring CT-based pancreatic cancer screening.
A new study out of Austria provided solid evidence that content-based image retrieval systems (CBIRS) enhance radiologists’ reading efficiency, while potentially improving their diagnostic accuracy.
Eight radiologists reviewed chest CTs from 108 patients with suspected diffuse parenchymal lung disease (DPLD), leveraging contextflow’s AI-based SEARCH Lung CT CBIRS with half of the exams.
Using the radiologists’ CT image regions of interest, the CBIRS would search a database of 6,542 chest CTs to identify similar scans, providing the rads with the three most likely disease patterns and supporting information (e.g. a list of potential differential diagnoses). The CBIRS’ added “context” had a notable impact on the radiologists:
- Reducing their average reading time by 31.3% (197 vs. 287 seconds)
- Reducing resident and attending radiologists’ reading times by 27% and 35%, respectively
- Improving overall diagnostic accuracy by over 7 percentage points (42.2% vs. 34.7%; not statistically significant)
These reading time reductions came despite the fact that radiologists were more likely to search for additional information when using the CBIRS (72% vs. 43% of cases). That’s partially because CBIRS enabled greater speed improvements when radiologists searched for additional information (110 seconds faster than without CBIRS) than when they didn’t (39 seconds faster).
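The headline efficiency figure is straightforward arithmetic over the mean reading times. A tiny sketch (note that the rounded means give ≈31.4%, so the study’s 31.3% presumably reflects unrounded underlying seconds):

```python
def pct_reduction(before_s: float, after_s: float) -> float:
    """Percent reduction from `before_s` to `after_s` (seconds)."""
    return (before_s - after_s) / before_s * 100

# Mean reading times reported in the study: 287 s without CBIRS, 197 s with it
print(round(pct_reduction(287, 197), 1))  # → 31.4
```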
This study presents a rare example of how imaging AI can significantly improve radiologists’ efficiency, while amplifying their current workflows and diagnostic decision-making processes. It’s also the second study in the last year suggesting that CBIRS might improve diagnostic accuracy, although the authors encourage more research into CBIRS’ accuracy impact to know for sure.
Cathay Life Insurance will use Lunit’s INSIGHT CXR AI solution to identify abnormalities in its applicants’ chest X-rays, potentially modernizing a manual underwriting process and uncovering a new non-clinical market for AI vendors.
Lunit INSIGHT CXR will be integrated into Cathay’s underwriting workflow, with the goals of enhancing its radiologists’ accuracy and efficiency, while improving Cathay’s underwriting decisions.
Lunit and Cathay have reason to be optimistic about this endeavor, given that their initial proof of concept study found that INSIGHT CXR:
- Improved Cathay’s radiologists’ reading accuracy by 20%
- Reduced the radiologists’ overall reading time by up to 90%
Those improvements could have a significant labor impact, considering that Cathay’s rads review 30,000 CXRs every year. They might have an even greater business impact, noting the important role that underwriting accuracy plays in policy profitability.
Lunit’s part of the announcement largely focused on its expansion beyond clinical settings, revealing plans to “become the driving force of digital innovation in the global insurance market” and to further expand its business into “various sectors outside the hospital setting.”
Even if life insurers only require CXRs for a small percentage of their applicants (older people, higher-value policies), they still review hundreds of thousands of CXRs each year. That makes insurers an intriguing new market segment for AI vendors, and makes you wonder what other non-clinical AI use cases might exist. However, it might also raise concerns among radiologists who remain skeptical about AI.
The first half of 2022 is now a wrap, and it was another big one for medical imaging. Here are some of the top storylines from the last six months and some things to keep in mind as we head into 2022’s second half:
- Imaging Goes Home – Healthcare’s major shift into patient homes seemed to be bringing imaging along with it in H1, leading to new vendor-side efforts focused on at-home ultrasound (e.g. Caption’s home echo program, GE’s Pulsenmore investment), more providers expanding their mobile imaging capabilities, and new research efforts focused on patient-performed exams and mobile imaging operations.
- AI Shakeup – Everyone who has been predicting AI consolidation got to take a victory lap in H1, which brought at least two strategic pivots (MaxQ AI & Kheiron) and the acquisitions of Aidence and Quantib (by RadNet) and Nines (by Sirona). This kind of consolidation is normal for an emerging segment, but it wouldn’t be surprising if the difficult funding climate leads to above-normal consolidation in H2.
- Photon Counting Reality – The momentum from Siemens’ photon counting CT launch in late 2021 carried into this year, leading to a series of studies suggesting that PCCT might be as good as anticipated, the launch of Samsung NeuroLogica’s own head/neck PCCT system, and increased photon counting R&D and marketing efforts from the other major CT OEMs.
- The Patient Engagement Push – The first half seemed to bring a surge in patient engagement activity, including new investments from the major image sharing vendors, increased pressure from radiology leaders to finally achieve universal image sharing, and new efforts to make radiology reports more accessible and understandable.
- The Platform Pathway – The trend towards AI platforms heated up in H1, as new vendors launched or expanded their AI platforms, the major PACS players increased their AI integration efforts, and startups and radiology teams increasingly embraced AI platforms as a solution to their narrow AI challenges.