We’re dedicating today’s Top Story to the people and publications that I rely on to find the most interesting medical imaging stories. Assuming that you already subscribe to The Imaging Wire, these are the 35 other newsletters, websites, blogs, and accounts to follow if you want to know what’s happening in radiology.
I’ll always check the mainstream radiology news websites (Aunt Minnie, Health Imaging, et al.) and the major medical imaging journals (RSNA, EJR, JACR, etc.), but if you want to find news you won’t see elsewhere and understand how it impacts radiology, the juiciest stories usually come from the people of medical imaging.
- Brian Casey Insights – Brian is the GOAT of radiology news and he has some big stuff coming up.
- AI for Radiology – Kicky will get you caught up on AI quickly, and she’s an actual radiology AI insider.
- Signify Research – Home of the best radiology analysis, backed by actual market data.
- Dr. Lauren Oakden-Rayner – My favorite radiology blogger. Her posts don’t just cover the news, they are the news.
- Ben White, MD – Excellent insights into the business of being a working radiologist.
- Aunt Minnie Forums – The Rads on the AM Forums can get nasty, but their insights are nice.
- Hardian Health – A must read if you’re trying to navigate AI regulatory issues.
- PACSMan – When Mike sent us our first hate mail I was pumped that someone was reading, and his Aunt Minnie editorials keep us pumped.
- Tom Greeson – Tom’s ReedSmith posts are a great way to know which radiology stories are actually significant and why.
The Best Radiology Social Media “Influencers” to Follow
Nowadays the juiciest news isn’t even published, it’s posted. And it’s often posted by these legends of radiology social media.
The Best Healthcare Newsletters and Sites
It can be pretty comfy inside the radiology news bubble, but imaging is just one part of healthcare. That’s what makes these newsletters and websites from outside the reading room so important.
If you want to stay informed about radiology news and know what’s going on across healthcare, these sources will give you everything you need. You can also join over 10k medical imaging lifers and sign up for The Imaging Wire and we’ll do it for you.
PS – If there are any radiology publications or healthcare news sources that should be on this list, let me know!
There’s plenty of bold forecasts about imaging AI’s long term potential, but short term projections of when AI startups will reach profitability are rarely disclosed and almost never bold. That’s why RadNet’s quarterly investor calls are proving to be such a valuable bellwether for the business of AI, and its latest briefing was no exception.
RadNet entered the AI arena with its 2020 acquisition of DeepHealth (~$20M) and solidified its AI presence in early 2022 by acquiring Aidence and Quantib (~$85M), but its AI business generated just $4.4M in revenue and booked a $24.9M pre-tax loss in 2022.
Those numbers are likely typical for similar-sized AI companies. However, RadNet’s path towards AI revenue growth and breakeven operations might outpace most of its peers.
- Looking into 2023, RadNet forecasts that its AI revenue will quadruple to between $16M and $18M, while its Adjusted EBITDA loss falls to between -$9M and -$11M.
- By 2024, RadNet expects its AI division to generate at least $25M to $30M in revenue, allowing it to achieve AI profitability for the first time.
So how exactly is RadNet going to achieve 6x AI revenue growth and reach profitability within just two years? Patients are going to pay for it.
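The growth multiples implied by those targets are easy to sanity-check. Here's a quick sketch using the guidance midpoints quoted above (all figures in $M):

```python
# Sanity-check the revenue multiples implied by RadNet's AI guidance
# (all figures in $M, taken from the quarterly-call numbers quoted above).

revenue_2022 = 4.4            # actual 2022 AI revenue
revenue_2023 = (16 + 18) / 2  # midpoint of 2023 guidance ($16M-$18M)
revenue_2024 = (25 + 30) / 2  # midpoint of 2024 guidance ($25M-$30M)

growth_2023 = revenue_2023 / revenue_2022  # ~3.9x, i.e. roughly "quadruple"
growth_2024 = revenue_2024 / revenue_2022  # ~6.3x, i.e. the "6x" growth

print(f"2023 multiple: {growth_2023:.2f}x")
print(f"2024 multiple: {growth_2024:.2f}x")
```

In other words, the "quadruple" and "6x" framing both fall straight out of the low end of a $4.4M 2022 base.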
RadNet expects its new direct-to-patient Enhanced Breast Cancer Detection (EBCD) service to generate between $11M and $13M in 2023 revenue, representing up to 72% of RadNet’s overall AI revenue and driving much of its AI profitability improvements. And EBCD’s nationwide rollout won’t be complete until Q3.
RadNet’s 2024 AI revenue and profit improvements will again rely on “substantial” EBCD growth, with some help from its Aidence and Quantib operations. Those improvements would offset delayed AI efficiency benefits that RadNet has “yet to really realize” due in part to slow radiologist adoption.
The fact that RadNet expects to become one of imaging’s largest and most profitable AI companies within the next two years might not be surprising. However, RadNet’s reliance on patient payments to drive that growth is astounding, and it’s something to keep an eye on as AI vendors and radiology groups work on their own AI monetization strategies.
We hear a lot about AI’s potential to expand ultrasound to far more users and clinical settings, and a new study out of Singapore suggests that ultrasound’s AI-driven expansion might go far beyond what many of us had in mind.
The PANES-HF trial set up a home-based echo heart failure screening program that equipped a team of complete novices (no experience with echo, or in healthcare) with EchoNous’s AI-guided handheld ultrasound system and Us2.ai’s AI-automated echo analysis and reporting solution.
After just two weeks of training, the novices performed at-home echocardiography exams on 100 patients with suspected heart failure, completing the studies in an average of 11.5 minutes per patient.
When compared to the same 100 patients’ NT-proBNP blood test results and reference standard echo exams (expert sonographers, cart-based echo systems, and cardiologist interpretations), the novice echo AI pathway…
- Yielded interpretable results in 96 patients
- Improved risk prediction accuracy versus NT-proBNP by 30%
- Detected abnormal LVEF <50% scans with a 0.880 AUC (vs. NT-proBNP’s 0.651-0.690 AUCs)
- Achieved good agreement with expert clinicians for LVEF <50% detection (k=0.742)
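That k=0.742 figure is Cohen's kappa, which measures how much two raters agree beyond what chance alone would produce. A minimal sketch of the formula, using purely hypothetical 2x2 counts (not the study's actual data):

```python
# Cohen's kappa: observed agreement corrected for chance agreement.
# The 2x2 counts below are hypothetical, chosen only to illustrate the math.

def cohens_kappa(both_pos, a_pos_b_neg, a_neg_b_pos, both_neg):
    n = both_pos + a_pos_b_neg + a_neg_b_pos + both_neg
    observed = (both_pos + both_neg) / n  # raw agreement rate
    # chance agreement from each rater's marginal positive/negative rates
    a_pos = (both_pos + a_pos_b_neg) / n
    b_pos = (both_pos + a_neg_b_pos) / n
    expected = a_pos * b_pos + (1 - a_pos) * (1 - b_pos)
    return (observed - expected) / (1 - expected)

# Hypothetical: novice-AI pathway vs. expert reads on 96 interpretable exams
kappa = cohens_kappa(both_pos=25, a_pos_b_neg=5, a_neg_b_pos=4, both_neg=62)
print(f"kappa = {kappa:.3f}")
```

Values in the 0.6-0.8 range are conventionally read as "substantial" agreement, which is why the authors describe 0.742 as good.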
These findings were strong enough for the authors to suggest that emerging ultrasound and AI technologies will enable healthcare organizations to create completely new heart failure pathways. That might start with task-shifting from cardiologists to primary care, but could extend to novice-performed exams and home-based care.
Considering the rising prevalence of heart failure, the recent advances in HF treatments, and the continued sonographer shortage, there’s clearly a need for more accessible and efficient echo pathways — and this study is arguably the strongest evidence that AI might be at the center of those new pathways.
The funniest physician on the internet, ophthalmologist and comedian Dr. Glaucomflecken, sparked quite a debate over private equity’s healthcare impact last week with this banger of a Valentine’s Day tweet:
“Every physician who sells their practice to private equity is choosing to make health care worse for everybody. I hope the money helps you sleep at night, because you have made life worse for every single patient and employee walking into your PE Daddy’s practice.”
Within three days, Dr. Glaucomflecken’s attack on healthcare PE garnered 1.2M views, 1,150 retweets, and 12k likes, while inspiring some telling conversations about private equity’s impact on radiology.
RadTwitter’s many private equity critics…
- Celebrated one of their biggest concerns gaining viral attention
- Warned that this trend is putting MBAs in control of patient care
- Theorized that PE is “driving physician satisfaction into the ground”
- Highlighted PE-backed rad practices’ staffing/retention challenges
- Joked that Dr. Glaucomflecken is now uninvited from the ACR meeting
Meanwhile, a few brave radiology PE leaders and defenders…
- Countered that Dr. Glaucomflecken’s post was unfairly broad
- Emphasized the challenges that private practices face on their own
- Reasoned that health systems are just as money-driven, and worse at leading practices
- Contended that PE improves radiology access in rural areas (others disputed this)
- Inferred that PE is “in the arena” working to improve care, while critics sit on the sidelines
The hundreds of other comments from non-radiologists in the Dr. Glaucomflecken thread made many of the same arguments about their specialties. They also revealed an overall consensus that the healthcare incentive system is flawed, that insurer influence is playing a big role in practice consolidation, and that many physician practices aren’t in a position to sell exclusively to physician-owned/led organizations.
Regardless of where you stand in the healthcare private equity debate, Dr. Glaucomflecken’s Twitter responses make it very clear that providers are concerned about the state of U.S. healthcare economics. That same discussion thread might also contain more ideas about areas where the U.S. healthcare system should improve than any published report we’ll cover this year.
Lumitron Technologies secured another $20M in funding to expand its manufacturing and commercialization capabilities as it works its way to a $1B-plus IPO and the launch of what it calls the biggest breakthrough in the history of X-ray technology.
Lumitron’s HyperVIEW EBCS imaging system boasts 100x greater image resolution and 100x lower radiation exposure than CT, while matching the size and price tag of a current higher-end CT scanner.
- The HyperVIEW EBCS’ ability to image at the cellular level could also support next-gen “flash radiotherapies” that directly target cancerous cells.
Lumitron is clearly bullish about its HyperVIEW EBCS scanner, forecasting that it will be used in “every aspect of medicine” and an array of industrial applications.
- The HyperVIEW’s rollout schedule is equally ambitious, targeting use at research universities and hospitals within the next year and clinical readiness within just two years.
Skeptics might find plenty of reasons to question whether Lumitron can actually achieve these lofty goals. For starters, Lumitron lists just four employees on LinkedIn, the general public has only seen artistic renderings of the HyperVIEW scanner, and launching a completely new modality might be one of the most challenging acts in the business of medical imaging.
- However, Lumitron also comes with plenty of credibility. The company was founded by well-established medtech and research leaders, its technology was developed at the famous Lawrence Livermore National Laboratory, and it now has $20M to fund its next steps.
We cover groundbreaking new imaging technologies all the time, but it’s exceptionally rare for those technologies to actually approach commercialization, especially from a relatively unknown company.
Because of that lack of precedent, hospitals will need to see a ton of evidence before they start making room for their new HyperVIEW scanners. However, if it truly outperforms modern CTs by 100x (with the same price and footprint), the Lumitron HyperVIEW might actually prove to be the biggest breakthrough in the history of X-ray.
GE HealthCare took a major step towards expanding its ultrasound systems to new users and settings, acquiring AI guidance startup Caption Health.
GE plans to integrate Caption’s AI guidance technology into its ultrasound platform, starting with POCUS devices and echocardiography exams. GE specifically emphasized how its Caption integration will help streamline echo adoption among novice operators and bring heart failure exams into “doctors’ offices, the home, and alternate sites of care.”
- That’s particularly notable given healthcare’s major shift outside of hospital walls, especially considering that Caption has already developed a unique home echo exam and virtual diagnosis service.
- It’s also another sign that GE sees big potential for at-home ultrasound, coming less than a year after investing in home maternity ultrasound startup Pulsenmore.
GE didn’t disclose the tuck-in acquisition’s value. However, Caption is relatively large for an AI startup (79 employees on LinkedIn, >$62M raised) and is arguably the most established company in the ultrasound guidance segment (FDA & CE approved, CMS-reimbursed, notable alliances).
- The fact that GE HealthCare has already made two acquisitions since spinning off in early January (after a 16-month pause) also suggests that the newly independent medtech giant has returned to M&A mode.
Of course, the acquisition is another sign that the imaging AI consolidation trend remains in full swing, marking at least the ninth AI startup acquisition since January 2022 and the third so far in 2023.
- One contributor to that AI consolidation surge appears to be ultrasound hardware vendors acquiring AI guidance companies; GE’s Caption acquisition comes about six months after Exo’s acquisition of Medo AI.
Ultrasound’s potential expansion to new users and clinical settings could create the kind of growth that most modalities only experience once in their lifetime (or never experience), and ease of use might dictate how far ultrasound is able to expand. That could make this acquisition particularly significant for GE HealthCare and for ultrasound’s path towards far broader adoption.
The last week brought two high profile studies underscoring radiology NLP’s potential to improve efficiency and accuracy, showing how the language-based technology can give radiologists a reporting head-start and allow them to enjoy the benefits of AI detection without the disruptions.
AI + NLP for Nodule QA – A new JACR study detailed how Yale New Haven Hospital combined AI and NLP to catch and report more incidental lung nodules in emergency CT scans, without impacting in-shift radiologists. The quality assurance program used a CT AI algorithm to detect suspicious nodules and an NLP tool to analyze radiology reports, flagging only the cases that AI marked as suspicious but the NLP tool marked as negative.
- The AI/NLP program processed 19.2k CT exams over an 8-month period, flagging just 50 cases (0.26%) for a second review.
- Those flagged cases led to 34 reporting changes and 20 patients receiving follow-up imaging recommendations.
- Just as notably, this semi-autonomous process helped rads avoid “thousands of unnecessary notifications” for non-emergent nodules.
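The QA logic described above is essentially a discordance filter: exams only reach a human reviewer when the image AI says "suspicious" but the NLP read of the report says "negative." A minimal sketch of that filter, with hypothetical names and example data (not Yale's actual implementation):

```python
# Hypothetical sketch of a discordance-based QA filter: flag a CT exam for
# second review only when the image AI marks it suspicious but the NLP read
# of the radiology report is negative.

from dataclasses import dataclass

@dataclass
class Exam:
    exam_id: str
    ai_suspicious: bool    # CT AI: suspicious nodule detected?
    report_positive: bool  # NLP: does the report mention a nodule?

def flag_for_review(exams):
    # Discordant cases only -- AI-positive, report-negative.
    return [e.exam_id for e in exams if e.ai_suspicious and not e.report_positive]

exams = [
    Exam("CT-001", ai_suspicious=True,  report_positive=True),   # already reported
    Exam("CT-002", ai_suspicious=True,  report_positive=False),  # flagged
    Exam("CT-003", ai_suspicious=False, report_positive=False),  # nothing to flag
]
print(flag_for_review(exams))  # ['CT-002']
```

Requiring both signals to disagree is what kept the flag rate down to 50 of 19.2k exams (0.26%), sparing in-shift rads the flood of AI-only notifications.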
NLP Auto-Captions – JAMA highlighted an NLP model that automatically generates free-text captions describing CXR images, streamlining the radiology report writing process. A Shanghai-based team trained the model using 74k unstructured CXR reports labeled for 23 different abnormalities, and tested with 5,091 external CXRs alongside two other caption-generating models.
- The NLP captions reduced radiology residents’ reporting times compared to when they used a normal captioning template or a rule-based captioning model (283 vs. 347 & 296 seconds), especially with abnormal exams (456 vs. 631 & 531 seconds).
- The NLP-generated captions also proved to be most similar to radiologists’ final reports (mean BLEU scores: 0.69 vs. 0.37 & 0.57; on 0-1 scale).
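BLEU scores a generated caption by its n-gram overlap with a reference text (0 = no overlap, 1 = identical). Real BLEU combines 1- to 4-gram precisions with a brevity penalty; the sketch below is a simplified unigram-only version with hypothetical example captions, just to show the shape of the metric:

```python
# Simplified unigram BLEU: clipped token-overlap precision times a brevity
# penalty. Real BLEU averages 1-4-gram precisions; captions are hypothetical.

import math
from collections import Counter

def unigram_bleu(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    overlap = sum((Counter(cand) & Counter(ref)).values())  # clipped matches
    precision = overlap / len(cand)
    # brevity penalty discourages trivially short candidates
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

generated = "no acute cardiopulmonary abnormality"
reference = "no acute cardiopulmonary abnormality is seen"
print(round(unigram_bleu(generated, reference), 3))
```

Here every generated token appears in the reference (precision 1.0), but the caption is shorter than the reference, so the brevity penalty pulls the score below 1 — which is why the study's 0.69 mean BLEU counts as strong similarity.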
These are far from the first radiology NLP studies, but the fact that these implementations improved efficiency (without sacrificing accuracy) or improved accuracy (without sacrificing efficiency) deserves extra attention at a time when trade-offs are often expected. Also, considering that everyone just spent the last month marveling at what ChatGPT can do, it might be a safe bet that even more impressive language and text-based radiology solutions are on the way.
The proactive whole-body scanning segment gained even more celebrity-driven momentum last week with the launch of Neko Health, a Sweden-based startup cofounded by Spotify CEO Daniel Ek.
Neko Health launches with the goal of improving early disease detection, allowing physicians to focus on preventive care and reducing late detection’s social and economic impact.
- The $190 exams combine a 360-degree body scan, cardiovascular scans, sensors, and blood tests to collect 50M data points (“skin, heart, vessels, respiration, microcirculation and more”) that are analyzed with AI to assess patients’ unique risks.
Neko Health’s cardiovascular exam includes cardiac ultrasound (among other technologies), but its other scanners are based on “cameras, lasers, and radars,” and aren’t the type of modalities that most of you associate with whole-body scanning (no MRI or CT).
- That said, Neko’s launch prompted the same type of radiologist backlash that we typically see when new whole-body imaging companies emerge, and Neko’s exams could still lead to the cascade of follow-ups that radiologists are concerned about.
Unfortunately for those concerned radiologists, the general public pays much more attention to the rich and famous than to what folks are upset about on RadTwitter, and it seems that elites love proactive whole-body exams…
- Spotify’s Daniel Ek co-founded Neko (in case you missed that part)
- Whole-body MRI startup Prenuvo is backed by some A-list investors (Apple’s Tony Fadell, Google’s Eric Schmidt, supermodel Cindy Crawford)
- AI-driven proactive MRI company Ezra’s investor list is full of execs and entrepreneurs, rather than the VCs that imaging startups typically rely on
- Whole-body scans have also been endorsed by some very influential celebrities (Oprah, Kim Kardashian, Chamath Palihapitiya, Paris Hilton, Kevin Rose)
Outside of the excellent celebrity endorsement work that Hologic has done for breast cancer screening, we don’t see that type of star power in traditional areas of medical imaging.
Neko Health largely steers clear of radiology’s turf from a modality perspective, but whole-body scanning’s recent influx of funding, innovations, and celebrity-driven awareness seem very relevant to radiology.
We spend a lot of time exploring the technical aspects of imaging AI performance, but little is known about how physicians are actually influenced by the AI findings they receive. A new Scientific Reports study addresses that knowledge gap, perhaps more directly than any other research to date.
The researchers provided 233 radiologists (experts) and internal and emergency medicine physicians (non-experts) with eight chest X-ray cases each. The CXR cases featured correct diagnostic advice, but were manipulated to show different advice sources (generated by AI vs. by expert rads) and different levels of advice explanations (only advice vs. advice w/ visual annotated explanations). Here’s what they found…
- Explanations Improve Accuracy – When the diagnostic advice included annotated explanations, both the IM/EM physicians and radiologists’ accuracy improved (+5.66% & +3.41%).
- Non-Rads with Explainable Advice Rival Rads – Although the IM/EM physicians performed far worse than rads when given advice without explanations, they were “on par with” radiologists when their advice included explainable annotations (see Fig 3).
- Explanations Help Radiologists with Tough Cases – Radiologists gained “limited benefit” from advice explanations with most of the X-ray cases, but the explanations significantly improved their performance with the single most difficult case.
- Presumed AI Use Improves Accuracy – When advice was labeled as AI-generated (vs. rad-generated), accuracy improved for both the IM/EM physicians and radiologists (+4.22% & +3.15%).
- Presumed AI Use Improves Expert Confidence – When advice was labeled as AI-generated (vs. rad-generated), radiologists were more confident in their diagnosis.
This study provides solid evidence supporting the use of visual explanations, and bolsters the increasingly popular theory that AI can have the greatest impact on non-experts. It also revealed that physicians trust AI more than some might have expected, to the point where physicians who believed they were using AI made more accurate diagnoses than they would have if they were told the same advice came from a human expert.
However, more than anything else, this study seems to highlight the underappreciated impact of product design on AI’s clinical performance.
A Cedars-Sinai-led team developed an echocardiography AI model that was able to accurately assess coronary artery calcium buildup, potentially revealing a safer, more economical, and more accessible approach to CAC scoring.
The researchers used 1,635 Cedars-Sinai patients’ transthoracic echocardiogram (TTE) videos paired with their CT-based Agatston CAC scores to train an AI model to predict patients’ CAC scores based on their PLAX view TTE videos.
When tested against Cedars-Sinai TTEs that weren’t used for AI training, the TTE CAC AI model detected…
- Zero CAC patients with “high discriminatory abilities” (AUC: 0.81)
- Intermediate patients “modestly well” (≥200 scores; AUC: 0.75)
- High CAC patients “modestly well” (≥400 scores; AUC: 0.74)
When validated against 92 TTEs from an external Stanford dataset, the AI model similarly predicted which patients had zero and high CAC scores (AUCs: 0.75 & 0.85).
More importantly, the TTE AI CAC scores accurately predicted the patients’ future risks. TTE CAC scores predicted one-year mortality similarly to CT CAC scores, and they even improved overall prediction of low-risk patients by downgrading patients who had high CT CAC scores and zero TTE CAC scores.
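The AUCs quoted above have a simple interpretation: the probability that a randomly chosen positive case (e.g., high CAC) gets a higher model score than a randomly chosen negative case. A minimal rank-based sketch, with hypothetical scores and labels (not the study's data):

```python
# AUC as a pairwise ranking probability: for every (positive, negative) pair,
# count how often the positive case scores higher. Data below is hypothetical.

def auc(labels, scores):
    pairs = 0.0
    correct = 0.0
    for li, si in zip(labels, scores):
        for lj, sj in zip(labels, scores):
            if li == 1 and lj == 0:      # one positive vs. one negative
                pairs += 1
                if si > sj:
                    correct += 1         # positive ranked higher
                elif si == sj:
                    correct += 0.5       # ties count as half
    return correct / pairs

labels = [1, 1, 1, 0, 0, 0, 0]                # 1 = high CAC, 0 = not
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]  # hypothetical model scores
print(auc(labels, scores))
```

On this scale, 0.5 is coin-flip ranking and 1.0 is perfect separation, so the study's 0.74-0.85 range sits in the "modestly well" to "high discriminatory ability" band the authors describe.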
CT-based CAC scoring is widely accepted, but it isn’t accessible to many patients, and concerns about its safety and value (cost, radiation, incidentals) have kept the USPSTF from formally recommending it for coronary artery disease surveillance. We’d need a lot more research and AI development efforts, but if TTE CAC AI solutions like this prove to be reliable, it could make CAC scoring far more accessible and potentially even more accepted.