Sirona Medical Acquires Nines AI, Talent

Sirona Medical announced its acquisition of Nines’ AI assets and personnel, representing notable milestones for Sirona’s integrated RadOS platform and the quickly changing imaging AI landscape.

Acquisition Details – Sirona acquired Nines’ AI portfolio (data pipeline, ML engines, workflow/analytics tools, AI models) and key team members (CRO, Director of Product, AI engineers), while Nines’ teleradiology practice was reportedly absorbed by one of its telerad customers. Terms of the acquisition weren’t disclosed, although this wasn’t a traditional acquisition considering that Sirona and Nines had the same VC investor.

Sirona’s Nines Strategy – Sirona’s mission is to streamline radiologists’ overly siloed workflows with its RadOS radiology operating system (unifies: worklist, viewer, reporting, AI, etc.), and it’s a safe bet that any acquisition or investment Sirona makes is intended to advance this mission. With that…

  • Nines’ most tangible contributions to Sirona’s strategy are its FDA-cleared AI models: NinesMeasure (chest CT-based lung nodule measurements) and NinesAI Emergent Triage (head CT-based intracranial hemorrhage and mass effect triage). The AI models will be integrated into the RadOS platform, bolstering Sirona’s strategy to allow truly integrated AI workflows.
  • Nines’ personnel might have the most immediate impact at Sirona, given the value and scarcity of experienced imaging software engineers and the fact that Nines’ product team arguably has more hands-on experience with radiologist workflows than any other imaging AI firm (at least among AI firms available for acquisition).
  • Nines’ other AI and imaging workflow assets should also help support Sirona’s future RadOS and AI development, although their impact is harder to assess for now.

The AI Shakeup Angle – This acquisition has largely been covered as another example of 2022’s AI shakeup, which isn’t too surprising given how active this year has been (MaxQ’s shutdown, RadNet’s Aidence/Quantib acquisitions, IBM shedding Watson Health). However, Nines’ strategy to combine a telerad practice with in-house AI development was quite unique and its decision to sell might say more about its specific business model (at its scale) than it does about the overall AI market.

The Takeaway

Since the day Sirona emerged from stealth, it’s done a masterful job articulating its mission to solve radiology’s workflow problems by unifying its IT infrastructure. Acquiring Nines’ AI assets certainly supports Sirona’s unified platform messaging, while giving it more technology and personnel resources to try to turn that message into a reality.

Meanwhile, Nines becomes the latest of surely many imaging AI startups to be acquired, pivoted, or shut down, as AI adoption evolves at a slower pace than some VC runways. Nines’ strategy was genuinely interesting and it had some big-name founders and advisors; now its work and team will live on through Sirona.

Intracranial Hemorrhage AI Efficiency

A new Radiology: Artificial Intelligence study out of Switzerland highlighted how Aidoc’s Intracranial Hemorrhage AI solution improved emergency department workflows, without hurting patient care. Even if that’s exactly what solutions like this are supposed to do, real world AI studies that go beyond sensitivity and specificity are still rare and worth some extra attention.

The Study – The researchers analyzed University Hospital of Basel’s non-contrast CT intracranial hemorrhage (ICH) exams before and after adopting the Aidoc ICH solution (n = 1,433 before & 3,017 after; ~14% ICH incidence in both groups).

Diagnostic Results – The Aidoc solution produced “practicable” overall diagnostic results (93% accuracy, 87.2% sensitivity, 93.9% specificity, and 97.8% NPV), although accuracy was lower with certain ICH subtypes (e.g. subdural hemorrhage 69.2%, 74/107). 
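For reference, these headline metrics all derive from the same confusion-matrix counts. Here’s a quick sketch using hypothetical counts chosen to roughly reproduce the reported figures at ~14% prevalence (these are not the study’s actual case counts):

```python
# Illustrative only: how accuracy, sensitivity, specificity, and NPV
# relate to confusion-matrix counts. The counts below are hypothetical,
# picked to approximate the reported results at ~14% ICH prevalence.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute standard diagnostic metrics from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts: 140 ICH cases among 1,000 exams (14% prevalence)
m = diagnostic_metrics(tp=122, fp=52, tn=808, fn=18)
print({k: round(v, 3) for k, v in m.items()})
```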

Efficiency Results – More notably, the Aidoc ICH solution “positively impacted” the hospital’s ED workflows, with improvements across a range of key metrics (with AI vs. before AI):

  • Communicating critical findings: 63 vs. 70 minutes
  • Communicating acute ICH: 58 vs. 73 minutes
  • Overall turnaround time to rule out ICH: 164 vs. 175 minutes
  • Turnaround time to rule out ICH during working hours: 167 vs. 205 minutes

Next Steps – The authors called for further efforts to streamline their stroke workflows and to create a clear ICH AI framework, accurately noting that “AI tools are only as reliable as the environment they are deployed in.”

The Takeaway
The internet hasn’t always been kind to emergency AI tools, and academic studies have rarely focused on the workflow efficiency outcomes that many radiologists and emergency teams care about. That’s not the case with this study, which did a good job showing the diagnostic and workflow upsides of ICH AI adoption, and added a nice reminder that imaging teams share responsibility for AI outcomes.

Creating a Cancer Screening Giant

A few days after shocking the AI and imaging center industries with its acquisitions of Aidence and Quantib, RadNet’s Friday investor briefing revealed a far more ambitious AI-enabled cancer screening strategy than many might have imagined.

Expanding to Colon Cancer – RadNet will complete its AI screening platform by developing a homegrown colon cancer detection system, estimating that its four AI-based cancer detection solutions (breast, prostate, lung, colon) could screen for 70% of cancers that are imaging-detectable at early stages.

Population Detection – Once its AI platform is complete, RadNet plans to launch a strategy to expand cancer screening’s role in population health, while making prostate, lung, and colon cancer screening as mainstream as breast cancer screening.

Becoming an AI Vendor – RadNet revealed plans to launch an externally-focused AI business that will lead with its multi-cancer AI screening platform, but will also create opportunities for RadNet’s eRAD PACS/RIS software. There are plenty of players in the AI-based cancer detection arena, but RadNet’s unique multi-cancer platform, significant funding, and training data advantage would make it a formidable competitor.

Geographic Expansion – RadNet will leverage Aidence and Quantib’s European presence to expand its software business internationally, as well as into parts of the US where RadNet doesn’t own imaging centers (RadNet has centers in just 7 states).

Imaging Center Upsides – RadNet’s cancer screening AI strategy will of course benefit its core imaging center business. In addition to improving operational efficiency and driving more cancer screening volumes, RadNet believes that the unique benefits of its AI platform will drive more hospital system joint ventures.

AI Financials – The briefing also provided rare insights into AI vendor finances, revealing that DeepHealth has been running at a $4M–$5M annual loss and that adding Aidence and Quantib might expand that loss to $10M–$12M (which seems OK given RadNet’s $215M EBITDA). RadNet hopes its AI division will become cash flow neutral within the next few years as revenue from outside companies ramps up.

The Takeaway

RadNet has very big ambitions to become a global cancer screening leader and significantly expand cancer screening’s role in society. Changing society doesn’t come fast or easy, but a goal like that reveals how much emphasis RadNet is going to place on developing and distributing its AI cancer screening platform going forward.

IBM Sells Watson Health

IBM is selling most of its Watson Health division to private equity firm Francisco Partners, creating a new standalone healthcare entity and giving both companies (IBM and the former Watson Health) a much-needed fresh start. 

The Details – Francisco Partners will acquire Watson Health’s data and analytics assets (including imaging) in a deal that’s rumored to be worth around $1B and scheduled to close in Q2 2022. IBM is keeping its core Watson AI tech and will continue to support its non-Watson healthcare clients.

Francisco’s Plans – Francisco Partners seems optimistic about its new healthcare company, revealing plans to maintain the current Watson Health leadership team and help the company “realize its full potential.” That’s not always what happens with PE acquisitions, but Francisco Partners has a history of growing healthcare companies (e.g. Availity, Capsule, GoodRx, Landmark Health) and there are a lot of upsides to Watson Health (good products, smart people, strong client list, a bargain M&A multiple, seems ideal for splitting up).

A Necessary Split – Like most Watson Health stories published over the last few years, news coverage of this acquisition overwhelmingly focused on Watson Health’s historical challenges. However, that approach seems lazy (or at least unoriginal) and misses the point that this split should be good news for both parties. IBM now has another $1B that it can use towards its prioritized hybrid cloud and AI platform strategy, and the new Watson Health company can return to growth mode after several years of declining corporate support.

Imaging Impact – IBM and Francisco Partners’ announcements didn’t place much focus on Watson Health’s imaging business, but it seems like the imaging group will also benefit from Francisco Partners’ increased support and by distancing itself from a brand that’s lost its shine. Even losing the core Watson AI tech should be ok, given that the Merge PACS team has increasingly shifted to a partner-focused AI strategy. That said, this acquisition’s true imaging impact will be determined by where the imaging group lands if/when Francisco Partners decides to eventually split up and sell Watson Health’s various units.

The Takeaway – The IBM Watson Health story is a solid reminder that expanding into healthcare is exceptionally hard, and it’s even harder when you wrap exaggerated marketing around early-stage technology and high-multiple acquisitions. Still, there’s plenty of value within the former Watson Health business, which now has an opportunity to show that value.

Duke’s Interpretable AI Milestone

A team of Duke University radiologists and computer engineers unveiled a new mammography AI platform that could be an important step towards developing truly interpretable AI.

Explainable History – Healthcare leaders have been calling for explainable imaging AI for some time, but explainability efforts have been mainly limited to saliency / heat maps that show what part of an image influenced a model’s prediction (not how or why).

Duke’s Interpretable Model – Duke’s new AI platform analyzes mammography exams for potentially cancerous lesions to help physicians determine if a patient should receive a biopsy, while supporting its predictions with image and case-based explanations. 

Training Interpretability – The Duke team trained their AI platform to locate and evaluate lesions following a process that human radiology educators and students would utilize:

  • First, they trained the AI model to detect suspicious lesions and to ignore healthy tissues
  • Then they had radiologists label the edges of the lesions
  • Then they trained the model to compare those lesion edges with lesion edges from an archive of images with confirmed outcomes
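The steps above can be sketched as a prototype-style classifier — a minimal illustration of the case-based comparison idea, not Duke’s actual model (the feature vectors and archive here are random placeholders standing in for learned lesion-edge features):

```python
import numpy as np

# Minimal sketch of case-based (prototype) classification: a new
# lesion's edge-feature vector is compared against an archive of
# confirmed cases, and the prediction is "explained" by citing the
# most similar prior cases. Features here are random placeholders.
rng = np.random.default_rng(0)

# Hypothetical archive: feature vectors for lesions with known outcomes.
archive_features = rng.normal(size=(20, 8))
archive_labels = np.array([0, 1] * 10)  # 0 = benign, 1 = malignant

def classify_with_explanation(query: np.ndarray, k: int = 3):
    """Predict by majority vote of the k most similar archived cases,
    returning those cases (index, similarity) as the explanation."""
    # Cosine similarity between the query and every archived case
    sims = archive_features @ query / (
        np.linalg.norm(archive_features, axis=1) * np.linalg.norm(query)
    )
    nearest = np.argsort(sims)[::-1][:k]
    prediction = int(archive_labels[nearest].mean() >= 0.5)
    return prediction, [(int(i), float(sims[i])) for i in nearest]

pred, evidence = classify_with_explanation(rng.normal(size=8))
print(pred, evidence)  # prediction plus the supporting prior cases
```

The point of the design is that the "explanation" is the evidence itself — confirmed cases a radiologist can inspect — rather than a post-hoc heat map.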

Interpretable Predictions – This training process allowed the AI model to identify suspicious lesions, highlight the classification-relevant parts of the image, and explain its findings by referencing confirmed images. 

Interpretable Results – Like many AI models, this early version could not identify cancerous lesions as accurately as human radiologists. However, it matched the performance of existing “black box” AI systems and the team was able to see why their AI model made its mistakes.

The Takeaway

It seems like concerns over AI performance are growing at about the same pace as actual AI adoption, making explainability / interpretability increasingly important. Duke’s interpretable AI platform might be in its early stages, but its use of previous cases to explain findings seems like a promising (and straightforward) way to achieve that goal, while improving diagnosis in the process.

RSNA 2021 Reflections

The first in-person RSNA since COVID is officially a wrap. Hope you had a blast if you made it to Chicago and a productive week if you stayed home. We also hope you enjoy The Imaging Wire’s big takeaways from what might have been both the most special and most subdued RSNA ever.

Crowds & Conversations – We were already expecting 50% lower attendance than RSNA 2019, but the exhibit hall and cab lines looked more like 70% below 2019’s crowds (even less on Sunday & Wednesday). That said, most of the stronger companies had steady booth traffic and nearly every exhibitor emphasized that the attendees who did show up were ready to have high-quality conversations.

Focus on Productivity – Just about every product message at RSNA focused on productivity and efficiency, often with greater emphasis than clinical effectiveness. The modality-based efficiency enhancements seemed to be the most impactful, which is good news for technologist bandwidth and patient throughput, but might be bad news for rad burnout unless informatics/AI efficiency can catch up (it doesn’t seem like that happened this year).

Modality Milestones – The major OEMs did a good job making modalities cool again, debuting milestone innovations across both their MR (low-helium, low-field, reconstruction, coils) and CT (photon-counting, spectral, upgradability) lineups. We also saw the latest scanners take big strides in operator efficiency and patient experience. There weren’t many breakthroughs with X-ray or ultrasound, and most point-of-care ultrasound OEMs stayed home (rads aren’t their market anyway), but attendees seemed okay with that.

AI Showcase – The RSNA AI Showcase had solid traffic and high energy (especially on Mon & Tues), helped by continued AI buzz and the fact that RSNA finally let AI vendors out of the basement. The AI Showcase highlighted many of the trends we’ve been seeing all year, including larger vendors transitioning to AI platform strategies, an increased focus on workflow integration and care coordination, and a greater emphasis on radiologist efficiency. There were also far fewer brand-new AI tools than previous years, as many vendors focused on improving their current products and/or expanding their portfolio via partnerships. 

PACS Cloud Focus – PACS vendors continued to place a major emphasis on their respective cloud advantages, and there was a widespread consensus that cloud is on every imaging IT roadmap. The PACS vendors seemed to talk less about multi-ology enterprise imaging than previous years, and expanding EI beyond radiology/cardiology still seemed pretty futuristic for most players. It was also quite clear that most of the PACS players’ AI marketplaces/platforms haven’t been as prioritized as earlier announcements might have suggested.

Best RSNA Since… 2019 – We’ve heard some folks saying this was the “best RSNA ever” because it was easy to get around and it was great to see everyone, but those seem more like pandemic silver linings than “best ever” qualifications. Still, the imaging industry made the most of RSNA 2021, and everyone seemed truly happy to be together again after two long years of working from home. As long as COVID cooperates, we should be set up for an excellent RSNA 2022.

Viz.ai’s Care Coordination Expansion

Viz.ai advanced its care coordination strategy last week, launching new Pulmonary Embolism and Aortic Disease modules, and unveiling its forthcoming Viz ANX cerebral aneurysm module.

PE & Aortic Modules – The new PE and Aortic modules use AI to quickly detect pulmonary embolisms and aortic dissection in CTA scans, and then coordinate care using Viz.ai’s 3D mobile viewer and clinical communications workflows. It appears that Viz.ai partnered with Avicenna.AI to create these modules, representing a logical way for Viz.ai to quickly expand its portfolio.

Viz ANX Module – The forthcoming Viz ANX module will use the 510k-pending Viz ANX algorithm to automatically detect suspected cerebral aneurysms in CTAs, and then leverage the Viz Platform for care coordination.

Viz.ai’s Care Coordination Strategy – Viz.ai called itself “the leader in AI-powered care coordination” a total of six times in these two announcements, and the company has definitely earned this title for stroke detection/coordination. Adding new modules to the Viz Platform is how Viz.ai could earn “leadership” status across all other imaging-detected emergent conditions.

The Takeaway – Viz.ai’s stroke detection/coordination platform has been among the biggest imaging AI success stories, making its efforts to expand to new AI-based detection and care coordination areas notable (and pretty smart). These module launches are also an example of diagnostic AI’s growing role throughout care pathways, showing how AI can add clinical value beyond the reading room.

Right Diagnoses, Wrong Reasons

An AJR study shared new evidence of how X-ray image labels influence deep learning decision making, while revealing one way developers can address this issue.

Confounding History – Although already well known by AI insiders, label and laterality-based AI shortcuts made headlines last year when they were blamed for many COVID algorithms’ poor real-world performance. 

The Study – Using 40k images from Stanford’s MURA dataset, the researchers trained three CNNs to detect abnormalities in upper extremity X-rays. They then tested the models for detection accuracy and used a heatmap tool to identify the parts of the images that the CNNs emphasized. As you might expect, labels played a major role in both accuracy and decision making.

  • The model trained on complete images (bones & labels) achieved a 0.844 AUC, but based 89% of its decisions on the radiographs’ laterality/labels.
  • The model trained without labels or laterality (only bones) detected abnormalities with a higher 0.857 AUC and attributed 91% of its decisions to bone features.
  • The model trained with only laterality and labels (no bones) still achieved a 0.638 AUC, showing that AI interprets certain labels as a sign of abnormalities.
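One simple version of the label-covering mitigation might look like the sketch below — a hypothetical preprocessing step that zeroes the corner regions where laterality markers often sit (a real pipeline would locate labels explicitly, e.g. via OCR or segmentation, rather than assume corner placement):

```python
import numpy as np

# Hypothetical mitigation sketch: mask the image regions where
# laterality markers/labels typically appear before training, so a
# CNN can't use them as shortcuts. Real pipelines should detect label
# locations explicitly instead of assuming they sit in the corners.
def mask_corner_labels(image: np.ndarray, frac: float = 0.15) -> np.ndarray:
    """Zero out the four corner regions of a 2D radiograph array."""
    out = image.copy()
    h = max(1, int(image.shape[0] * frac))
    w = max(1, int(image.shape[1] * frac))
    out[:h, :w] = 0      # top-left
    out[:h, -w:] = 0     # top-right
    out[-h:, :w] = 0     # bottom-left
    out[-h:, -w:] = 0    # bottom-right
    return out

xray = np.ones((256, 256), dtype=np.float32)
masked = mask_corner_labels(xray)
print(masked[0, 0], masked[128, 128])  # corners zeroed, center untouched
```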

The Takeaway – Labels are just about as common on X-rays as actual anatomy, and it turns out that they could have an even greater influence on AI decision making. Because of that, the authors urged AI developers to address confounding image features during the curation process (potentially by covering labels) and encouraged AI users to screen CNNs for these issues before clinical deployment.

The False Hope of Explainable AI

Many folks view explainability as a crucial next step for AI, but a new Lancet paper from a team of AI heavyweights argues that explainability might do more harm than good in the short-term, and AI stakeholders would be better off increasing their focus on validation.

The Old Theory – For as long as we’ve been covering AI, really smart and well-intentioned people have warned about the “black-box” nature of AI decision making and forecasted that explainable AI will lead to more trust, less bias, and greater adoption.

The New Theory – These black-box concerns and explainable AI forecasts might be logical, but they aren’t currently realistic, especially for patient-level decision support. Here’s why:

  • Explainability methods describe how AI systems work, not how decisions are made
  • AI explanations can be unreliable and/or superficial
  • Most medical AI decisions are too complex to explain in an understandable way
  • Humans over-trust computers, so explanations can hurt their ability to catch AI mistakes
  • AI explainability methods (e.g. heat maps) require human interpretation, risking confirmation bias
  • Explainable AI adds more potential error sources (AI tool + AI explanation + human interpretation)
  • Although we still can’t fully explain how acetaminophen works, we don’t question whether it works, because we’ve tested it extensively

The Explainability Alternative – Until suitable explainability methods emerge, the authors call for “rigorous internal and external validation of AI models” to make sure AI tools are consistently making the right recommendations. They also advised clinicians to remain cautious when referencing AI explanations and warned that policymakers should resist making explainability a requirement. 

Explainability’s Short-Term Role – Explainability definitely still has a role in AI safety, as it’s “incredibly useful” for model troubleshooting and systems audits, which can improve model performance and identify failure modes or biases.

The Takeaway – It appears we might not be close enough to explainable AI to make it a part of short-term AI strategies, policies, or procedures. That might be hard to accept for the many people who view the need for AI explainability as undebatable, and it makes AI validation and testing more important than ever.

ImageBiopsy Lab & UCB’s AI Alliance

Global pharmaceutical company UCB recently licensed its osteoporosis AI technology to MSK AI startup ImageBiopsy Lab, representing an interesting milestone for several emerging AI business models.

The UCB & ImageBiopsy Lab Alliance – ImageBiopsy Lab will use UCB’s BoneBot AI technology to develop and commercialize a tool that screens CT scans for “silent” spinal fractures to identify patients who should be receiving osteoporosis treatments. The new tool will launch by 2023 as part of ImageBiopsy Lab’s ZOO MSK platform.

UCB’s AI Angle – UCB produces an osteoporosis drug that would be prescribed far more often if detection rates improve (over 2/3 of vertebral fractures are currently undiagnosed). That’s why UCB developed and launched BoneBot AI in 2019 and it’s why the pharma giant is now working with ImageBiopsy Lab to bring it into clinical use.

The PharmaAI Trend – We’re seeing a growing trend of drug and device companies working with AI developers to help increase treatment demand. The list is getting pretty long, including quite a few PharmaAI alliances targeting lung cancer treatment (Aidence & AstraZeneca, Qure.ai & AstraZeneca, Huma & Bayer, Optellum & J&J) and a diverse set of AI alliances with medical device companies (Imbio & Olympus for emphysema, Aidoc & Inari for PE, Viz.ai & Medtronic for stroke).

The Population Health AI Trend – ImageBiopsy Lab’s BoneBot AI licensing is also a sign of AI’s growing momentum in population health, following increased interest from academia and major commercial efforts from Cleerly (cardiac screening) and Zebra Medical Vision (cardiac and osteoporosis screening… so far). This alliance also introduces a new type of population health AI beneficiary (pharma companies), in addition to risk holders and patients.

The Takeaway – ImageBiopsy Lab and UCB’s new alliance didn’t get a lot of media attention last week, but it tells an interesting story about how AI business models are evolving beyond triage, and how those changes are bringing some of healthcare’s biggest names into the imaging AI arena.
