IBM Sells Watson Health

IBM is selling most of its Watson Health division to private equity firm Francisco Partners, creating a new standalone healthcare entity and giving both companies (IBM and the former Watson Health) a much-needed fresh start. 

The Details – Francisco Partners will acquire Watson Health’s data and analytics assets (including imaging) in a deal that’s rumored to be worth around $1B and scheduled to close in Q2 2022. IBM is keeping its core Watson AI tech and will continue to support its non-Watson healthcare clients.

Francisco’s Plans – Francisco Partners seems optimistic about its new healthcare company, revealing plans to maintain the current Watson Health leadership team and help the company “realize its full potential.” That’s not always what happens with PE acquisitions, but Francisco Partners has a history of growing healthcare companies (e.g. Availity, Capsule, GoodRx, Landmark Health) and there are a lot of upsides to Watson Health (good products, smart people, strong client list, a bargain M&A multiple, seems ideal for splitting up).

A Necessary Split – Like most Watson Health stories published over the last few years, news coverage of this acquisition overwhelmingly focused on Watson Health’s historical challenges. However, that approach seems lazy (or at least unoriginal) and misses the point that this split should be good news for both parties. IBM now has another $1B that it can use towards its prioritized hybrid cloud and AI platform strategy, and the new Watson Health company can return to growth mode after several years of declining corporate support.

Imaging Impact – IBM and Francisco Partners’ announcements didn’t place much focus on Watson Health’s imaging business, but it seems like the imaging group will benefit both from Francisco Partners’ increased support and from shedding a brand that’s lost its shine. Even losing the core Watson AI tech should be OK, given that the Merge PACS team has increasingly shifted to a partner-focused AI strategy. That said, this acquisition’s true imaging impact will be determined by where the imaging group lands if/when Francisco Partners eventually decides to split up and sell Watson Health’s various units.

The Takeaway – The IBM Watson Health story is a solid reminder that expanding into healthcare is exceptionally hard, and it’s even harder when you wrap exaggerated marketing around early-stage technology and high-multiple acquisitions. Still, there’s plenty of value within the former Watson Health business, which now has an opportunity to show that value.

Duke’s Interpretable AI Milestone

A team of Duke University radiologists and computer engineers unveiled a new mammography AI platform that could be an important step towards developing truly interpretable AI.

Explainable History – Healthcare leaders have been calling for explainable imaging AI for some time, but explainability efforts have been mainly limited to saliency / heat maps that show what part of an image influenced a model’s prediction (not how or why).

Duke’s Interpretable Model – Duke’s new AI platform analyzes mammography exams for potentially cancerous lesions to help physicians determine if a patient should receive a biopsy, while supporting its predictions with image and case-based explanations. 

Training Interpretability – The Duke team trained their AI platform to locate and evaluate lesions following a process that human radiology educators and students would use:

  • First, they trained the AI model to detect suspicious lesions and to ignore healthy tissues
  • Then they had radiologists label the edges of the lesions
  • Then they trained the model to compare those lesion edges with lesion edges from an archive of images with confirmed outcomes

Interpretable Predictions – This training process allowed the AI model to identify suspicious lesions, highlight the classification-relevant parts of the image, and explain its findings by referencing confirmed images. 
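The case-based explanation step above can be sketched as a simple prototype comparison: a query lesion’s learned edge features are matched against an archive of cases with confirmed outcomes, and the closest matches serve as the “explanation.” This is a minimal illustration, not Duke’s actual implementation — the feature vectors, labels, and similarity measure here are all hypothetical stand-ins.

```python
# Hypothetical sketch of case-based explanation: compare a query lesion's
# edge-feature vector against an archive of prototypes with confirmed
# outcomes, and return the closest matches as the "explanation".
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def explain_lesion(query_features, prototype_archive, top_k=2):
    """Rank archived prototypes (feature_vector, confirmed_label) by similarity."""
    scored = [
        (cosine_similarity(query_features, features), label)
        for features, label in prototype_archive
    ]
    scored.sort(reverse=True)
    return scored[:top_k]

# Toy archive: short vectors stand in for learned lesion-edge embeddings.
archive = [
    ([0.9, 0.1, 0.2], "malignant"),
    ([0.1, 0.8, 0.7], "benign"),
    ([0.85, 0.2, 0.1], "malignant"),
]
matches = explain_lesion([0.88, 0.15, 0.18], archive)
```

In the real model, of course, the comparison happens over learned deep features rather than hand-built vectors, but the retrieval-and-cite structure is the same: the prediction comes with the confirmed cases that most resemble it.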

Interpretable Results – Like many AI models, this early version could not identify cancerous lesions as accurately as human radiologists. However, it matched the performance of existing “black box” AI systems and the team was able to see why their AI model made its mistakes.

The Takeaway

It seems like concerns over AI performance are growing at about the same pace as actual AI adoption, making explainability / interpretability increasingly important. Duke’s interpretable AI platform might be in its early stages, but its use of previous cases to explain findings seems like a promising (and straightforward) way to achieve that goal, while improving diagnosis in the process.

AI Disparity Detection

Most studies involving imaging AI and patient race/ethnicity warn that AI might exacerbate healthcare inequalities, but a new JACR study outlines one way that imaging AI could actually improve care for typically underserved patients.

The AI vs. EHR Disparity Problem – The researchers used a deep learning model to detect atherosclerotic disease in CXRs from two cohorts of COVID-positive patients: 814 patients from a suburban ambulatory center (largely White, higher-income) and 485 patients admitted to an inner-city hospital (largely minority, lower-income).

When they validated the AI predictions versus the patients’ EHR codes they found that:

  • The AI predictions were far more likely to match the suburban patients’ EHR codes than the inner-city patients’ EHR codes (0.85 vs. 0.69 AUCs)
  • AI/EHR discrepancies were far more common among patients who were Black or Hispanic, preferred a non-English language, or lived in disadvantaged zip codes

The Imaging AI Solution – This study suggests healthcare systems could use imaging AI-based biomarkers and EHR data to flag patients who might have undiagnosed conditions, allowing them to get these patients into care and identify/address larger systemic barriers.

The Value-Based Care Justification – In addition to healthcare ethics reasons, the authors noted that imaging/EHR discrepancy detection could become increasingly financially important as we transition to more value-based models. AI/EHR analytics like this could be used to ensure at-risk patients are cared for as early as possible, healthcare disparities are addressed, and value-based metrics are achieved.

The Takeaway – Over the last year we’ve seen population health incidental detection emerge as one of the most exciting imaging AI use cases, while racial/demographic bias emerged as one of imaging AI’s most troubling challenges. This study managed to combine these two topics to potentially create a new way to address barriers to care, while giving health systems another tool to ensure they’re delivering value-based care.

MD Anderson’s Lung Cancer Blood Test

MD Anderson researchers developed a blood and risk-based test that could improve how we identify lung cancer screening candidates, potentially bringing more high-risk patients into screening while keeping more low-risk patients out.

The Blood + Risk Test – The test combines MD Anderson’s blood-based protein biomarker test with a lung cancer risk model that analyzes patient smoking history (the PLCOm2012 model). This combined test would be used to identify patients who should enroll in LD-CT screening programs.

The Study – MD Anderson researchers used the test to analyze 10k blood samples from 2,745 people with a 10+ pack-year smoking history (including 1,299 samples from 552 people who developed cancer), finding that the blood + risk test:

  • Identified 105 of the 119 people diagnosed with cancer within one year
  • Beat the USPSTF 2021 criteria’s sensitivity (88.4% vs. 78.5%) and specificity (56.2% vs. 49.3%)
  • Identified 9.2% more lung cancer cases than the USPSTF criteria
  • Referred 13.7% fewer unnecessary screening patients than the USPSTF criteria
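The detection counts above line up with the stated sensitivity, as a quick arithmetic check shows (illustrative only — the paper’s exact denominators may differ slightly from this person-level calculation):

```python
# Rough consistency check on the reported study figures.
detected = 105       # cancers flagged by the blood + risk test
total_cancers = 119  # people diagnosed with cancer within one year

sensitivity = detected / total_cancers
# ≈ 0.882, close to the 88.4% sensitivity reported for the combined test
```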

Blood-Based Momentum – Blood-based tests appear to be gaining momentum as a first-line cancer screening method, as the last 6 months brought a promising new MGH lung cancer test and a key validation milestone for the multi-cancer early detection blood test (MCED; detects 50 types of cancer).

The Takeaway – Although there’s still more research to be done, blood-based tests could bring more high-risk patients into LD-CT lung cancer screening programs, while reducing screening participation among patients who don’t actually need it. In other words, blood tests like these could address lung cancer screening’s two biggest challenges.

The AMA’s Quantitative Codes

The American Medical Association issued new CPT III codes for CT and MRCP quantification (see page 4), representing key milestones for quantitative and incidental/population health imaging.

CT Quantification Codes – The AMA’s 2022 CPT III update includes two new codes for quantitative CT tissue characterization, interpretation, and reporting. These codes could be a big deal for AI firms and radiology departments hoping to launch CT-based population health solutions (and eventually bill for them), such as Nanox AI’s HealthCCSng CAC scoring product and UCSF’s automated CAC scoring system. They also come just six months after the AMA added a similar CPT III code for using AI to automatically detect vertebral fractures in existing CT scans (covering Nanox AI’s VCF solution).

MRCP Quantification Codes – The AMA also added CPT III codes for quantitative magnetic resonance cholangiopancreatography (QMRCP) interpretation and reporting. These codes are a solid step for MRCP quantification products like Perspectum’s MRCP+ and could help drive adoption for this far less-subjective MRCP interpretation method. In the process, it might even narrow ERCP’s role to purely therapeutic procedures.

About CPT III Codes – Since there’s often confusion about CPT III codes, it’s worth noting that they are intended to help collect clinical data for emerging technologies / procedures to support future coverage and regulatory decisions. CPT III codes don’t have assigned RVUs, so actual reimbursements would be up to payors.

The Takeaway

It’s pretty clear that the AMA is starting to see value in image quantification, AI, and incidental detection. In the last six months the AMA has issued four quantitative imaging CPT III codes, all of which directly support key imaging AI use cases (CT tissue characterization, CT vertebral fracture detection, ultrasound tissue characterization, MRI post-processing) and two that support key population health and incidental detection applications (CT tissue characterization, CT vertebral fracture detection). 

MaxQ AI Shuts Down

The Imaging Wire recently learned that MaxQ AI has stopped commercial operations, representing arguably the biggest consolidation event in imaging AI’s young history.

About MaxQ AI – The early AI trailblazer (founded in 2013) is best known for its Accipio ICH & Stroke triage platform and its solid list of channel partners (Philips, Fujifilm, IBM, Blackford, Nuance, and a particularly strong alliance w/ GE). 

About the Shutdown – MaxQ has officially stopped commercial operations and let go of its sales and marketing workforce. However, it’s unclear whether MaxQ AI is shutting down completely, or if this is part of a strategic pivot or asset sale.

Shutdown Impact – MaxQ’s commercial shutdown leaves its Accipio channel partners and healthcare customers without an ICH AI product (or at least one fewer ICH product), while creating opportunities for its competitors to step in (e.g., Qure.ai, Aidoc, Avicenna.ai). 

A Consolidation Milestone – MaxQ AI’s commercial exit represents the first of what could prove to be many AI vendor consolidations, as larger AI players grow more dominant and funding runways become shorter. In fact, MaxQ AI might fit the profile of the type of AI startups facing the greatest consolidation threat, given that it operated within a single highly-competitive niche (at least six ICH AI vendors) that’s been challenged to improve detection without slowing radiologist workflows. 

The Takeaway – It’s never fun covering news like this, but MaxQ AI’s commercial shutdown is definitely worth the industry’s attention. The fact is, consolidation happens in every industry and it could soon play a larger role in imaging AI.

Note: MaxQ AI’s shutdown unfortunately leaves some nice, talented, and experienced imaging professionals out of a job. Imaging Wire readers who are building their AI teams should consider reaching out to these folks.

Home Ultrasound Goes Mainstream

Patients performing their own at-home ultrasound exams sounds like a pretty futuristic idea, but it’s becoming increasingly common in Israel due to a growing partnership between Clalit Health Services (Israel’s largest HMO) and DIY ultrasound startup Pulsenmore.

DIY Fertility Ultrasound – Clalit and Pulsenmore just signed an $11M agreement that will equip Clalit’s fertility treatment patients with thousands of Pulsenmore FC ultrasound systems over the next four years. The patients will use the Pulsenmore FC to perform self-exams during the IVF (in vitro fertilization) and fertility preservation processes and then transmit their scans to Clalit’s fertility clinicians. 

Pulsenmore Momentum – Pulsenmore previously provided Clalit with thousands of Pulsenmore ES fetal ultrasound systems, allowing expecting mothers to perform and transmit nearly 15k fetal ultrasounds since mid-2020. Pulsenmore also landed an interesting deal with Tel Aviv’s Sheba Medical Center in early 2021 that allowed pregnant women in Sheba’s COVID ward to perform their own fetal ultrasounds and transmit the scans to the hospital’s maternity ward.

Pulsenmore Potential – Pulsenmore’s early momentum is certainly helped by Israel’s unique healthcare system, but the company also has a European CE Mark (for the ES system), $40M in IPO funding, and ambitions to expand globally.

The Takeaway

The fact that thousands of ultrasounds are being used in Israeli homes shows that the home ultrasound concept has mainstream potential, and there’s a growing list of factors that could make it a reality. We’ve already seen a similar home system from Butterfly Network and a major industry trend towards smaller, easier-to-use (or even wearable) ultrasounds, while the COVID pandemic has increasingly normalized at-home diagnostics and teleconsultations.

It will take some big changes for handheld ultrasounds to become MORE common than the stethoscope, but that idea doesn’t seem as ridiculous as it did a few years ago.

Imaging in 2022

Happy New Year, and welcome to the first Imaging Wire of 2022. For those of you working on your annual gameplans, here are some major imaging themes to keep in mind.

COVID Wave Watch – Nothing will have more influence on imaging in 2022 than how / when the COVID pandemic subsides, and how many more waves and variants emerge until we get there.

Efficiency Focus – It’s abundantly clear that imaging must become more efficient, making workflow improvements arguably the top priority for radiology teams and the folks who sell to them.

AI Matures – Imaging AI should mature at an even faster pace this year, bringing greater clinical adoption (and expectations), better workflow integration, improved use cases and business models, and the emergence of clear AI leaders. We’ll also likely see an initial wave of consolidation due to acquisitions and/or VC-prompted shutdowns.

More M&A – Imaging’s extremely active M&A climate should continue into 2022. Based on recent trends, this year’s M&A hotspots are likely to include PE-backed rad practice and imaging center acquisitions, enterprise imaging vendors adding to their tech and “ology” stacks, and more modality and solution expansions from the major OEMs.

Advanced Imaging Advancements – 2022 is shaping up to be a milestone year for MR and CT technology. On the MRI side, recent breakthroughs in magnet strength, helium requirements, portability, and image enhancement (among others) should lead to big changes in how / where MRI can be used. On the CT side, we’ll see OEMs increase their focus on achieving photon-counting CT leadership, even if most of that focus will be from their R&D and future product marketing teams in 2022.

The Patient Engagement Push – Digital patient engagement continues to gain momentum across healthcare, placing pressure on radiology teams to keep up. In 2022, that might mean getting better at radiology’s current patient engagement methods (e.g. image sharing, patient-friendly reporting, follow-up management), although patients’ expectations will likely evolve at an even faster pace.

Imaging Leaves the Hospital – A lot more imaging exams could be performed outside hospital walls in 2022, as payors continue to incentivize outpatient imaging (and image-related procedures) and at-home care continues its massive growth. 

While it’s hard to say which, if any, of these trends will be the top story of the next 12 months, it seems likely that we’re heading into another year with more big news than can fit into a seven-bullet roundup. Wishing you the best in 2022, Imaging Wire readers!

Imaging In 2021

Congrats on wrapping up a truly wild year for radiology and medical imaging, everyone. Here are some of the top storylines from the last 12 months that might explain why it felt more like 18 months.

Mid-COVID – This time last year radiology teams and vendors were preparing for a post-COVID future, but that obviously wasn’t what happened in 2021. Instead, they battled their way through a second pandemic year and accelerated some major imaging-related trends that might extend well into the future (cloud IT, portable imaging, remote reading, backlogs, burnout, tele/home care, and more).

Big Acquisitions – It might not seem like it, but 2021 included an unusually high number of industry-changing acquisitions. These acquisitions turned two imaging leaders into parts of much bigger non-imaging companies (Nuance & Microsoft; Change & UnitedHealthcare), transformed Intelerad into a top-tier PACS player (Ambra, Insignia, HeartIT, LUMEDX), created a pair of new public companies through SPAC mergers (Butterfly & Hyperfine), brought the first big AI acquisition (Zebra-Med & Nanox), gave Canon its own photon-counting detectors (Redlen), and added surgical ultrasound to GE’s portfolio (BK Medical). Of course, there were plenty of practice and imaging center acquisitions too.

AI Maturation – AI is still super young, but there were plenty of signs that it’s growing up fast. 2021 saw imaging AI make its way into far more clinical workflows and curriculums, created a wider divide between the AI leaders and the 2nd/3rd-tier players, and drove a lot more AI vendor consolidation than it might appear. 

Burnout – Burnout remained a dominant theme again this year, making workflow efficiency the top focus area for most radiology team leaders, product developers, and marketers. 

Developing World Imaging – The developing world’s lack of medical imaging is definitely not new, but it seems like imaging players started paying more attention to the half of the world that still doesn’t have enough imaging access. We saw a sustained focus on low/middle income countries from Hyperfine/Butterfly/Nanox/Qure.ai, new developing world strategies from Siemens and Fujifilm, and a major tuberculosis CXR AI endorsement from the World Health Organization.

Population Health Pivot – 2021 also brought a major increase in population health AI activity, including commercial launches from Nanox AI and Cleerly, an increased research focus from academia, and UCSF deploying an automated CAC scoring system for all chest CTs.

Detecting the Radiographically Occult

A new study published in European Heart Journal – Digital Health suggests that AI can detect aortic stenosis (AS) in chest X-rays, which would be a major breakthrough if confirmed, but will be met with plenty of skepticism until then.

The Models – The Japan-based research team trained/validated/tested three DL models using 10,433 CXRs from 5,638 patients (all from the same institution), using echocardiography assessments to label each image as AS-positive or AS-negative.

The Results – The best performing model detected AS-positive patients with a 0.83 AUC, while achieving 83% sensitivity, 69% specificity, 71% accuracy, and a 97% negative predictive value (but… a 23% PPV). Given the widespread use and availability of CXRs, these results were good enough for the authors to suggest that their DL model could be a valuable way to detect aortic stenosis.
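That 97% NPV / 23% PPV pairing isn’t contradictory — both metrics depend heavily on disease prevalence. As a hedged sketch (the ~10% AS prevalence below is an illustrative assumption, not a figure from the paper), Bayes’ rule reproduces both values from the reported sensitivity and specificity:

```python
# Why a model can post a 97% NPV alongside a 23% PPV: both depend on
# prevalence, not just on the model. Assuming an AS prevalence of roughly
# 10% in the test set (illustrative), Bayes' rule recovers both figures
# from the reported sensitivity and specificity.
sensitivity = 0.83
specificity = 0.69
prevalence = 0.10  # assumed for illustration

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
true_neg = specificity * (1 - prevalence)
false_neg = (1 - sensitivity) * prevalence

ppv = true_pos / (true_pos + false_pos)   # ≈ 0.23
npv = true_neg / (true_neg + false_neg)   # ≈ 0.97
```

In other words, at low prevalence even a decent classifier will generate mostly false positives, which is part of why the 23% PPV drew so much attention.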

The Response – The folks on radiology/AI Twitter found these results “hard to believe,” given that human rads can’t detect aortic stenosis in CXRs with much better accuracy than a coin flip, and considering that these models were only trained/validated/tested with internal data. The conversation also revealed a growing level of AI study fatigue that will likely become worse if journals don’t start enforcing higher research standards (e.g. external validation, mentioning confounding factors, addressing the 23% PPV, maybe adding an editorial).

The Takeaway – Twitter’s MDs and PhDs love to critique study methodology, but this thread was a particularly helpful reminder of what potential AI users are looking for in AI studies — especially studies that claim AI can detect a condition that’s barely detectable by human experts.
