MGH’s Multimodal Thyroid Ultrasound AI

An MGH and Harvard Medical team developed a multimodal ultrasound AI platform that applies an interesting mix of AI techniques to accurately detect and stage thyroid cancer, potentially improving diagnosis and treatment planning.

The Platform – The platform combines radiomics, topological data analysis (TDA), ML-based TI-RADS assessments, and deep learning, allowing it to capture more information, minimize noise, and improve prediction accuracy.
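
The paper's fusion details aren't spelled out here, so below is a minimal late-fusion sketch, assuming each branch (radiomics, TDA, TI-RADS ML, deep learning) outputs a per-nodule malignancy probability that a stacked classifier then combines; the data and names are illustrative, not the MGH team's actual architecture:

```python
# Hypothetical late-fusion of four AI branches; data are random stand-ins
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_nodules = 362  # training nodule count reported in the study

# One malignancy probability per nodule from each branch:
# radiomics, TDA, TI-RADS ML, and deep learning (placeholders here)
branch_probs = np.column_stack([rng.uniform(size=n_nodules) for _ in range(4)])
labels = rng.integers(0, 2, size=n_nodules)  # 1 = malignant (placeholder labels)

# A stacked logistic regression learns how much weight to give each branch
fusion = LogisticRegression().fit(branch_probs, labels)
fused_risk = fusion.predict_proba(branch_probs)[:, 1]  # combined malignancy risk
```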

The Study – Starting with 1,346 ultrasound images from 784 patients, the researchers trained the multimodal AI platform with 362 nodules (103 malignant) and validated it against a pair of internal (51 malignant, 98 benign) and external (270 malignant, 50 benign) datasets, finding that:

  • The platform predicted 98.7% of internal dataset malignancies (0.99 AUC)
  • The platform predicted 91.4% of external dataset malignancies (0.94 AUC)
  • The individual AI methods were far less accurate on their own (80% to 89% accuracy with the internal dataset)
  • A version of the platform accurately predicted pathological stages (93% for T-stage, 89% for N-stage, 98% for extrathyroidal extension)
  • The platform predicted BRAF mutations with 96% accuracy

Next Steps – The researchers plan to validate their multimodal platform in prospective multicenter clinical trials, including in low-resource countries where it might be particularly helpful.


The Takeaway

We cover plenty of ultrasound AI and thyroid cancer imaging studies, but this team’s multi-AI approach is unique and appears promising. A multimodal AI platform like this might make thyroid cancer diagnosis more efficient and less subjective, avoid unnecessary biopsies, allow non-invasive staging and mutation assessment, and lead to more personalized treatments. That would be a major accomplishment, and might suggest that similar multimodal AI platforms could be developed for other cancers and imaging modalities.

Radiology’s AI ROI Mismatch

A thought-provoking JACR editorial by Emory’s Hari Trivedi, MD suggests that AI’s slow adoption rate has little to do with its quality or clinical benefits, and a lot to do with radiology’s misaligned incentives.

After interviewing 25 clinical and industry leaders, the radiology professor and co-director of Emory’s HITI Lab detailed the following economic mismatches:

  • Private practices value AI that improves radiologist productivity, allowing them to increase reading volumes without equivalent increases in headcount. That makes triage or productivity-focused AI valuable, but gives them no economic justification to purchase AI that catches incidentals, ensures follow-ups, or reduces unnecessary biopsies.
  • Academic centers or hospitals that own radiology groups have far more to gain from AI products that detect incidental/missed findings and then drive internal admissions, referrals, and procedures. That means their highest-ROI AI solutions often drive revenue outside of the radiology department, while creating more radiologist labor.
  • Community hospital emergency departments value AI that allows them to discharge or treat emergency patients faster, although this often doesn’t economically benefit their radiology departments or partner practices.
  • Payor/provider health systems (e.g. the VA, Intermountain, Kaiser) can be open to a broad range of AI, but they especially value AI that reduces costs by avoiding unnecessary tests or catching early signs of diseases.


The Takeaway

People tend to paint imaging AI with a broad brush (AI is… all good, all bad, a job stealer, or the future) and we’ve seen a similar approach in AI adoption barrier editorials (AI just needs… trust, reimbursements, integration, better accuracy, or the killer app). However, even if each of these adoption barriers is solved, it’s hard to see how AI could achieve widespread adoption if the groups paying for it aren’t economically benefiting from it.

Because of that, Dr. Trivedi encourages vendors to develop AI that provides “returns” to the same groups that make the “investments.” That might mean that few AI products achieve widespread adoption on their own, but that a diverse group of specialized AI products achieves widespread use across all radiology settings.

Sirona Medical Acquires Nines AI, Talent

Sirona Medical announced its acquisition of Nines’ AI assets and personnel, representing notable milestones for Sirona’s integrated RadOS platform and the quickly-changing imaging AI landscape.

Acquisition Details – Sirona acquired Nines’ AI portfolio (data pipeline, ML engines, workflow/analytics tools, AI models) and key team members (CRO, Director of Product, AI engineers), while Nines’ teleradiology practice was reportedly absorbed by one of its telerad customers. Terms of the acquisition weren’t disclosed, although this wasn’t a traditional acquisition considering that Sirona and Nines had the same VC investor.

Sirona’s Nines Strategy – Sirona’s mission is to streamline radiologists’ overly-siloed workflows with its RadOS radiology operating system (unifies: worklist, viewer, reporting, AI, etc.), and it’s a safe bet that any acquisition or investment Sirona makes is intended to advance this mission. With that…

  • Nines’ most tangible contributions to Sirona’s strategy are its FDA-cleared AI models: NinesMeasure (chest CT-based lung nodule measurements) and NinesAI Emergent Triage (head CT-based intracranial hemorrhage and mass effect triage). The AI models will be integrated into the RadOS platform, bolstering Sirona’s strategy to allow truly integrated AI workflows.
  • Nines’ personnel might have the most immediate impact at Sirona, given the value and scarcity of experienced imaging software engineers and the fact that Nines’ product team arguably has more hands-on experience with radiologist workflows than any other imaging AI firm (at least among AI firms available for acquisition).
  • Nines’ other AI and imaging workflow assets should also help support Sirona’s future RadOS and AI development, although their impact is harder to assess for now.

The AI Shakeup Angle – This acquisition has largely been covered as another example of 2022’s AI shakeup, which isn’t too surprising given how active this year has been (MaxQ’s shutdown, RadNet’s Aidence/Quantib acquisitions, IBM shedding Watson Health). However, Nines’ strategy to combine a telerad practice with in-house AI development was quite unique and its decision to sell might say more about its specific business model (at its scale) than it does about the overall AI market.

The Takeaway

Since the day Sirona emerged from stealth, it’s done a masterful job articulating its mission to solve radiology’s workflow problems by unifying its IT infrastructure. Acquiring Nines’ AI assets certainly supports Sirona’s unified platform messaging, while giving it more technology and personnel resources to try to turn that message into a reality.

Meanwhile, Nines becomes the latest of surely many imaging AI startups to be acquired, pivoted, or shut down, as AI adoption evolves at a slower pace than some VC runways. Nines’ strategy was genuinely interesting and its founders and advisors included some big names; now its work and team will live on through Sirona.

CheXstray Drift Detection

A Stanford AIMI and Microsoft Healthcare team just took a step towards addressing imaging AI’s looming drift problem, unveiling their CheXstray drift detection system.

Imaging AI’s Drift Problem – The list of FDA-cleared imaging AI products continues to grow, and we’re getting better at AI deployment. However, there’s still no practical way to monitor how imaging AI models respond to their constantly changing data environments (tech, vendors, protocols, patient & disease mix, etc.) or whether model performance drifts over time.

The CheXstray Solution – The team used a pair of public CXR datasets (n = 224k & 160k CXRs) to train/test the CheXstray solution, which automatically detects drift by monitoring a range of multimodal inputs (DICOM metadata, image appearance, clinical workflows) alongside model performance.
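
Since drift monitoring is essentially a statistical comparison problem, here's a minimal sketch of the general approach, assuming a live window of exams is compared against a reference window with two-sample tests; the features, distributions, and threshold are illustrative stand-ins, not CheXstray's actual metrics:

```python
# Hypothetical drift monitor: compare live exam features against a reference window
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

# Stand-ins for per-exam features: DICOM metadata (patient age),
# image appearance (mean pixel intensity), and model output (predicted probability)
reference = {
    "patient_age": rng.normal(55, 15, 5000),
    "mean_intensity": rng.normal(0.48, 0.05, 5000),
    "model_score": rng.beta(2, 5, 5000),
}
live = {
    "patient_age": rng.normal(62, 15, 500),  # simulated population shift
    "mean_intensity": rng.normal(0.48, 0.05, 500),
    "model_score": rng.beta(2, 5, 500),
}

for name in reference:
    stat, p = ks_2samp(reference[name], live[name])  # two-sample KS test
    flag = "DRIFT" if p < 0.01 else "ok"
    print(f"{name:15s} KS={stat:.3f} p={p:.4f} {flag}")
```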

CheXstray Results – Initial experiments showed that the automated CheXstray workflows rivaled ground truth audits for drift detection, essentially achieving the workflow’s proof-of-concept goal. 

Automation Alternatives – Until we have automated monitoring solutions like CheXstray, AI vendors and radiology departments might have to rely on ongoing audits (requiring test set curation, labeling, analytics, etc.) and/or asking radiologists to provide ongoing model feedback. Unfortunately, those options undermine AI’s intended labor-reducing value proposition. Plus, radiologists have already made it quite clear that they don’t think monitoring should be their responsibility (and regulators might agree).

The Takeaway
We haven’t solved imaging AI’s drift monitoring problem yet, and there will be other hurdles to overcome before we see a solution like this achieve clinical adoption (more research, regulatory changes, new modalities, training without massive public datasets). Still, the CheXstray team just showed how imaging AI performance could be automatically monitored in real-time. That’s an important step in imaging AI’s evolution, and it might prove to be critical as more hospitals head into the 2nd or 3rd years after their “successful” AI deployments.

Intracranial Hemorrhage AI Efficiency

A new Radiology: Artificial Intelligence study out of Switzerland highlighted how Aidoc’s Intracranial Hemorrhage AI solution improved emergency department workflows, without hurting patient care. Even if that’s exactly what solutions like this are supposed to do, real world AI studies that go beyond sensitivity and specificity are still rare and worth some extra attention.

The Study – The researchers analyzed University Hospital of Basel’s non-contrast CT intracranial hemorrhage (ICH) exams before and after adopting the Aidoc ICH solution (n = 1,433 before & 3,017 after; ~14% ICH incidence w/ both groups).

Diagnostic Results – The Aidoc solution produced “practicable” overall diagnostic results (93% accuracy, 87.2% sensitivity, 93.9% specificity, and 97.8% NPV), although accuracy was lower with certain ICH subtypes (e.g. subdural hemorrhage 69.2%, 74/107). 
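
For readers who want to sanity-check those figures, here's a back-of-envelope reconstruction, assuming ~14% ICH incidence across the 3,017 post-adoption exams; the confusion-matrix counts below are derived from the reported rates, not taken from the study's raw data:

```python
# Approximate counts implied by the reported rates (not the study's actual data)
tp, fn = 368, 54    # ~422 ICH-positive exams at 87.2% sensitivity
tn, fp = 2437, 158  # ~2,595 ICH-negative exams at 93.9% specificity

accuracy = (tp + tn) / (tp + tn + fp + fn)  # ≈ 0.930, matches the reported 93%
sensitivity = tp / (tp + fn)                # ≈ 0.872, share of true ICH detected
specificity = tn / (tn + fp)                # ≈ 0.939, share of ICH-free exams cleared
npv = tn / (tn + fn)                        # ≈ 0.978, trust in a negative AI result
print(f"acc={accuracy:.3f} sens={sensitivity:.3f} spec={specificity:.3f} npv={npv:.3f}")
```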

Efficiency Results – More notably, the Aidoc ICH solution “positively impacted” University Hospital of Basel’s ED workflows, with improvements across a range of key metrics (with AI vs. without):

  • Communicating critical findings: 63 vs. 70 minutes
  • Communicating acute ICH: 58 vs. 73 minutes
  • Overall turnaround time to rule out ICH: 164 vs. 175 minutes
  • Turnaround time to rule out ICH during working hours: 167 vs. 205 minutes

Next Steps – The authors called for further efforts to streamline their stroke workflows and to create a clear ICH AI framework, accurately noting that “AI tools are only as reliable as the environment they are deployed in.”

The Takeaway
The internet hasn’t always been kind to emergency AI tools, and academic studies have rarely focused on the workflow efficiency outcomes that many radiologists and emergency teams care about. That’s not the case with this study, which did a good job showing the diagnostic and workflow upsides of ICH AI adoption, and added a nice reminder that imaging teams share responsibility for AI outcomes.

Creating a Cancer Screening Giant

A few days after shocking the AI and imaging center industries with its acquisitions of Aidence and Quantib, RadNet’s Friday investor briefing revealed a far more ambitious AI-enabled cancer screening strategy than many might have imagined.

Expanding to Colon Cancer – RadNet will complete its AI screening platform by developing a homegrown colon cancer detection system, estimating that its four AI-based cancer detection solutions (breast, prostate, lung, colon) could screen for 70% of cancers that are imaging-detectable at early stages.

Population Detection – Once its AI platform is complete, RadNet plans to launch a strategy to expand cancer screening’s role in population health, while making prostate, lung, and colon cancer screening as mainstream as breast cancer screening.

Becoming an AI Vendor – RadNet revealed plans to launch an externally-focused AI business that will lead with its multi-cancer AI screening platform, but will also create opportunities for RadNet’s eRAD PACS/RIS software. There are plenty of players in the AI-based cancer detection arena, but RadNet’s unique multi-cancer platform, significant funding, and training data advantage would make it a formidable competitor.

Geographic Expansion – RadNet will leverage Aidence and Quantib’s European presence to expand its software business internationally, as well as into parts of the US where RadNet doesn’t own imaging centers (RadNet has centers in just 7 states).

Imaging Center Upsides – RadNet’s cancer screening AI strategy will of course benefit its core imaging center business. In addition to improving operational efficiency and driving more cancer screening volumes, RadNet believes that the unique benefits of its AI platform will drive more hospital system joint ventures.

AI Financials – The briefing also provided rare insights into AI vendor finances, revealing that DeepHealth has been running at a $4M-$5M annual loss, and that adding Aidence / Quantib might expand that loss to $10M-$12M (seemingly manageable given RadNet’s $215M EBITDA). RadNet hopes its AI division will become cash flow neutral within the next few years as revenue from outside companies ramps up.

The Takeaway

RadNet has very big ambitions to become a global cancer screening leader and significantly expand cancer screening’s role in society. Changing society doesn’t come fast or easy, but a goal like that reveals how much emphasis RadNet is going to place on developing and distributing its AI cancer screening platform going forward.

RadNet’s Big AI Play

Imaging center giant RadNet shocked the AI world this week, acquiring Dutch startups Aidence and Quantib to support its AI-enabled cancer screening strategy.

Acquisition Details – RadNet acquired Aidence for $40M-$50M and Quantib for $45M, positioning them alongside DeepHealth within its new AI division. Aidence’s Veye Lung Nodules solution (CT lung nodule detection) is used across seven European countries and has been submitted for FDA 510(k) clearance, while Quantib’s prostate and brain MRI solutions have CE and FDA clearance and are used in 20 countries worldwide.

RadNet’s Cancer Screening Strategy – RadNet sees a huge future for cancer screening and believes Aidence (lung cancer) and Quantib (prostate cancer) will combine with DeepHealth (breast cancer) to make it a population health screening leader. 

RadNet’s AI Screening History – Even if these acquisitions weren’t expected, they aren’t out of character for RadNet, which created its mammography AI portfolio through a series of 2019-2020 acquisitions (DeepHealth, Nulogix) and equity investments (WhiteRabbit.ai). Plus, acquisitions have been a core part of RadNet’s imaging center strategy since before we were even talking about AI.

Unanswered Questions – It’s still unclear whether RadNet will take advantage of Aidence / Quantib’s European presence to expand internationally or if RadNet will start selling its AI portfolio to other hospitals and imaging center chains.

Another Consolidation Milestone – All those forecasts of imaging AI market consolidation seem to be quickly coming true in 2022, following MaxQ’s pivot out of imaging and RadNet’s Aidence / Quantib acquisitions. It’s also becoming clearer what type of returns AI startups and VCs are willing to accept, as Aidence and Quantib sold for about 3.5-times and 5.5-times their respective venture funding ($14M & $8M), and Nanox acquired Zebra-Med for 1.7 to 3.5-times its VC funding ($57.4M).

The Takeaway

It seems that RadNet will leverage its newly-expanded AI portfolio to become the US’ premier cancer screening company. That would be a huge accomplishment if cancer screening volumes grow as RadNet is forecasting. However, RadNet’s combination of imaging AI expertise, technology, funding, and training data could allow it to have an even bigger impact beyond the walls of its imaging centers.

Duke’s Interpretable AI Milestone

A team of Duke University radiologists and computer engineers unveiled a new mammography AI platform that could be an important step towards developing truly interpretable AI.

Explainable History – Healthcare leaders have been calling for explainable imaging AI for some time, but explainability efforts have been mainly limited to saliency / heat maps that show what part of an image influenced a model’s prediction (not how or why).

Duke’s Interpretable Model – Duke’s new AI platform analyzes mammography exams for potentially cancerous lesions to help physicians determine if a patient should receive a biopsy, while supporting its predictions with image and case-based explanations. 

Training Interpretability – The Duke team trained their AI platform to locate and evaluate lesions following a process that human radiology educators and students would utilize:

  • First, they trained the AI model to detect suspicious lesions and to ignore healthy tissues
  • Then they had radiologists label the edges of the lesions
  • Then they trained the model to compare those lesion edges with lesion edges from an archive of images with confirmed outcomes

Interpretable Predictions – This training process allowed the AI model to identify suspicious lesions, highlight the classification-relevant parts of the image, and explain its findings by referencing confirmed images. 
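
To make the case-based idea concrete, here's a minimal nearest-prototype sketch, assuming lesion-margin features are embedded as vectors and compared against an archive of confirmed cases; it illustrates the general prototype concept, not the Duke team's actual architecture:

```python
# Hypothetical case-based explanation via nearest prototypes; all data random
import numpy as np

rng = np.random.default_rng(3)

archive_embeddings = rng.normal(size=(200, 64))  # confirmed-outcome cases
archive_labels = rng.integers(0, 2, size=200)    # 1 = malignant
query = rng.normal(size=64)                      # new lesion's margin embedding

# Cosine similarity between the new lesion and every archived case
sims = archive_embeddings @ query / (
    np.linalg.norm(archive_embeddings, axis=1) * np.linalg.norm(query)
)
top5 = np.argsort(sims)[::-1][:5]  # the five most similar confirmed cases

# The score is explained by pointing at these look-alike cases
malignancy_score = archive_labels[top5].mean()
print("most similar cases:", top5, "malignancy score:", malignancy_score)
```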

Interpretable Results – Like many AI models, this early version could not identify cancerous lesions as accurately as human radiologists. However, it matched the performance of existing “black box” AI systems and the team was able to see why their AI model made its mistakes.

The Takeaway

It seems like concerns over AI performance are growing at about the same pace as actual AI adoption, making explainability / interpretability increasingly important. Duke’s interpretable AI platform might be in its early stages, but its use of previous cases to explain findings seems like a promising (and straightforward) way to achieve that goal, while improving diagnosis in the process.

AI Disparity Detection

Most studies involving imaging AI and patient race/ethnicity warn that AI might exacerbate healthcare inequalities, but a new JACR study outlines one way that imaging AI could actually improve care for typically underserved patients.

The AI vs. EHR Disparity Problem – The researchers used a deep learning model to detect atherosclerotic disease in CXRs from two cohorts of COVID-positive patients: 814 patients from a suburban ambulatory center (largely White, higher-income) and 485 patients admitted to an inner-city hospital (largely minority, lower-income).

When they validated the AI predictions versus the patients’ EHR codes they found that:

  • The AI predictions were far more likely to match the suburban patients’ EHR codes than the inner-city patients’ EHR codes (0.85 vs. 0.69 AUCs)
  • AI/EHR discrepancies were far more common among patients who were Black or Hispanic, preferred a non-English language, or lived in disadvantaged zip codes

The Imaging AI Solution – This study suggests healthcare systems could use imaging AI-based biomarkers and EHR data to flag patients who might have undiagnosed conditions, allowing them to get these patients into care and to identify/address larger systemic barriers.
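
As a minimal sketch of what that flagging could look like, assuming per-patient AI disease scores and a boolean for whether a matching diagnosis code exists in the EHR (the threshold and field names are illustrative, not from the study):

```python
# Hypothetical AI/EHR discrepancy flagging; data are random stand-ins
import numpy as np

rng = np.random.default_rng(11)
n_patients = 485  # inner-city cohort size from the study

ai_score = rng.uniform(size=n_patients)         # AI-predicted disease probability
ehr_coded = rng.uniform(size=n_patients) < 0.3  # True if the EHR carries a matching code

# Patients the model considers high-risk but the chart never diagnosed:
# candidates for outreach, follow-up care, and systemic-barrier review
flagged = np.flatnonzero((ai_score > 0.8) & ~ehr_coded)
print(f"{flagged.size} patients flagged for possible undiagnosed disease")
```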

The Value-Based Care Justification – In addition to healthcare ethics reasons, the authors noted that imaging/EHR discrepancy detection could become increasingly financially important as we transition to more value-based models. AI/EHR analytics like this could be used to ensure at-risk patients are cared for as early as possible, healthcare disparities are addressed, and value-based metrics are achieved.

The Takeaway – Over the last year we’ve seen population health incidental detection emerge as one of the most exciting imaging AI use cases, while racial/demographic bias emerged as one of imaging AI’s most troubling challenges. This study managed to combine these two topics to potentially create a new way to address barriers to care, while giving health systems another tool to ensure they’re delivering value-based care.

MaxQ AI Shuts Down

The Imaging Wire recently learned that MaxQ AI has stopped commercial operations, representing arguably the biggest consolidation event in imaging AI’s young history.

About MaxQ AI – The early AI trailblazer (founded in 2013) is best known for its Accipio ICH & Stroke triage platform and its solid list of channel partners (Philips, Fujifilm, IBM, Blackford, Nuance, and a particularly strong alliance w/ GE). 

About the Shutdown – MaxQ has officially stopped commercial operations and let go of its sales and marketing workforce. However, it’s unclear whether MaxQ AI is shutting down completely, or if this is part of a strategic pivot or asset sale.

Shutdown Impact – MaxQ’s commercial shutdown leaves its Accipio channel partners and healthcare customers without an ICH AI product (or at least one fewer ICH product), while creating opportunities for its competitors to step in (e.g., Qure.ai, Aidoc, Avicenna.ai). 

A Consolidation Milestone – MaxQ AI’s commercial exit represents the first of what could prove to be many AI vendor consolidations, as larger AI players grow more dominant and funding runways become shorter. In fact, MaxQ AI might fit the profile of the type of AI startups facing the greatest consolidation threat, given that it operated within a single highly-competitive niche (at least six ICH AI vendors) that’s been challenged to improve detection without slowing radiologist workflows. 

The Takeaway – It’s never fun covering news like this, but MaxQ AI’s commercial shutdown is definitely worth the industry’s attention. The fact is, consolidation happens in every industry and it could soon play a larger role in imaging AI.

Note: MaxQ AI’s shutdown unfortunately leaves some nice, talented, and experienced imaging professionals out of a job. Imaging Wire readers who are building their AI teams should consider reaching out to these folks.
