Imaging Wire Q&A: Zebra-Med Thinks Big

With Zohar Elhanani
Zebra Medical Vision, Chief Executive Officer


The role of imaging AI continues to grow, as radiology workflows increasingly utilize these tools to prioritize patients and support diagnoses. This already represents a big change for healthcare, but it could be just the beginning of imaging AI’s far greater public health evolution that extends well beyond the radiology department and could change how and when many diseases are diagnosed.

In this Imaging Wire Q&A we sat down with Zebra Medical Vision CEO, Zohar Elhanani, to discuss Zebra-Med’s view of how imaging AI is helping healthcare today and how AI’s role in public health could be much bigger than many of us imagine.



You had a front row seat during two key periods in the medical imaging industry’s evolution. What are the major themes that connect those periods and how are they shaping imaging’s future?

I started my career in medical imaging right when we were shifting from analog to digital. My company’s products moved images between healthcare facilities, radiologists, teleradiologists, and referring physicians. That was step one of the digital evolution.

Fast forward 20 years, we’re now seeing a digital image volume evolution, as medical images are being produced, analyzed, and stored at a massive scale. Volumes have grown so much it’s been hard for radiologists to keep up.

This digital image volume growth also made imaging AI possible, which is becoming a larger part of the radiology workflow, and helping radiologists interpret images as efficiently and accurately as possible.

So for me, it’s gone full circle, from the start of the digital imaging evolution and into the imaging AI evolution.



How do you view the next phase of the AI evolution?

AI is already becoming a driving force in medical imaging diagnostics. It’s becoming commonly used across healthcare facilities and providers, and not only in radiology. This is really a tectonic shift in healthcare.

The COVID pandemic and the focus on clinical and revenue cycle efficiency have made AI much more than just a buzzword. AI is actually becoming more focused on validated use cases and generating real, tangible ROI.

For Zebra-Med, as a medical imaging AI pioneer, this has been a journey. We initially targeted detection of low prevalence findings, triaging acute conditions, and improving turnaround times for radiologists. That was a very good entry point. It was a valuable way to substantiate how AI can detect abnormalities and prioritize reads.

During our AI journey, we also realized that although these are valuable use cases, they don't always present a clear ROI. As part of our evolution, we're now looking to expand, and we've already introduced products targeting larger populations at scale, focusing on high-prevalence chronic conditions that often go undetected.

We feel that promoting preventative care for treatable illnesses will expand AI to broader populations and more use cases, while supporting the shift from fee-for-service models to value-based care.

We're committed to population health AI. We're building out our population health product offering and roadmap, and we'll introduce more solutions over time, in addition to our coronary calcium scoring and vertebral compression fracture solutions. We think that's a path for the future and an area where AI can play a bigger role.



We don’t hear AI companies talk about population health very often. Can you tell me more about how AI supports population health?

The pathway to value-based care involves making healthcare systems more efficient and offering patients preventative care, rather than waiting for undetected diseases to get worse.

Our population health solutions focus on catching diseases that have the highest rates of morbidity and mortality. Coronary heart disease and osteoporosis are silent killers, and they get worse over time.

Radiologists don’t always note or look for these findings. Generally, someone walks in for a specific condition, like a broken rib, and incidental findings are not necessarily caught or communicated.

Our solutions yield more information from existing CT scans and EMR data. By applying these algorithms, we can spot undetected diseases and alert physicians to initiate a pathway to care that improves patient health and reduces costs for healthcare systems.

This is where the whole shift to value-based care is heading and we think that’s an area where AI and Zebra-Med could play a bigger role.



How does AI economics work for population health programs?

So obviously there are two sides.

First, there's a revenue cycle side that involves the actual income from providing medical care. And obviously, in value-based care systems, these are capitated programs.

Second, there’s the cost reduction side, achieved through early intervention and avoiding expensive care for under-treated and non-treated conditions.

So the idea is to create enough incentive for both payer and provider to look at AI as a way to reduce cost but also manage patient risk.

Radiologists need to be motivated and incentivized to identify and confirm these findings, so that's one area that needs attention. The medical imaging AI industry has been struggling to find the right way to motivate radiologists to look into findings that fall outside the purpose of the original study. Zebra-Med is always at the forefront of finding solutions, and we aim to do that here too.



Who would be involved in evaluating and implementing AI-based population health initiatives?

In our population health projects, we generally work with chief revenue officers and chief population health officers, who look at the breadth of cost and quality of care across their population. The two have to go hand-in-hand. What is the cost and what is the quality?

There also needs to be buy-in at the point of care by the radiologist. That’s where the finding is detected. But in terms of the program as a whole, it’s orchestrated by the chief revenue officer, chief population health officer, and the chief medical officer. They prescribe the pathway to care and define what needs to trigger that pathway based on AI-detected incidental findings.



What’s the best way for these population health executives to involve the radiology department?

There needs to be some kind of economic benefit for the radiologist to take action on these findings. One incentive is obviously just quality of care and the breadth of the report itself, but a financial incentive is also required. That's part of the equation, and that's something that needs to be sorted out at the IDN level between the payer and the provider as part of a value-based care paradigm.



When population health programs use imaging AI to identify incidentals at scale, follow-up management becomes really important. What’s the best way to do follow-up management in a program like this?

The emphasis here has to be on establishing pathways to care from the point that the AI and the radiologists confirm a finding. And I think that’s again part of the shift to a value-based care paradigm where these findings make their way to actual treatment, which reduces costs and improves patient care.

That’s exactly where we’re focusing our efforts in order to make sure that a finding doesn’t just stay there in the report itself. It actually triggers a call for action to take the finding to the right stakeholder at the provider level or beyond.

That’s a critical part of it. What is the pathway to care and what are the incentives around that pathway under a value-based care program or plan?



Would population health programs achieve any benefits from AI that they weren’t expecting?

Definitely. We've run our own tests comparing our algorithms' outputs against what's written in the EMR and patient records, and we found many findings that had never been documented. And that's simply by running algorithms retrospectively on existing data and substantiating the value of AI.

So definitely, the response has been very favorable to the fact that findings get missed and under-reported, and that there's value there.

Now, the question is how to deploy that at scale and how to create the actions and the pathway to care from these detections.



Do you have any advice for healthcare systems considering using AI to support their own population health efforts?

One of our larger customers recently shared with us that his three priorities for AI-enabled population health are improving patient care, reducing liability risk, and adding financial value.

I completely agree. Combining the improvement of patient care, the financial value for the system, and reduction of liability risk is critical.

I think that’s something the industry as a whole is still looking for. How do you substantiate the value of AI in terms of the financial benefit? How does it really improve patient care as a whole? And specifically, if we look at triage solutions, how do they really impact low prevalence acute findings versus what we see in population health with high prevalence chronic illnesses?

That's the goal for this whole pathway that we're discussing. AI for population health isn't here to replace referring physicians or regular checkups. It's here to serve as a kind of early warning signal for chronic disease. That's really the idea, serving as a safety net for any finding that exists but is not detected or is under-reported. It's another layer that augments whatever is done by primary care physicians or ongoing radiologist interpretations.

Long term, it obviously provides a better cost structure for the entire system and offers comprehensive preventative care for the patient. And as I said earlier, we won't simply be looking at a handful of conditions. It will involve a longer pathway covering many incidentals and making sure they're all accounted for, at least knowing that they're there and considering potential care pathways, so that nothing is ignored or under-treated.

As a whole, it's another layer of detection that doesn't currently exist. That's how we see AI playing a big role in the population health domain.

Imaging Wire Q&A: Arterys’ AI Journey

With John Axerio-Cilies, PhD
Arterys
CEO & Co-Founder

It’s been quite a decade for AI and cloud technology in the healthcare space, with some major milestones and learning moments along the way.

Arterys had a front-row view of many of these milestones, as the industry's first cloud-native imaging AI developer and one of the only companies that serves as both an AI developer and a multi-vendor AI marketplace platform provider.

In this Imaging Wire Q&A we sat down with Arterys CEO and co-founder, John Axerio-Cilies, PhD, to discuss medical imaging’s AI and cloud evolution and how Arterys works with its Center of Excellence partners to make AI real.



Arterys’ 10-year anniversary makes you imaging AI veterans. What are some of the key milestones you’ve witnessed during this journey?

When we started back in 2011, imaging AI as we know it wasn’t really a thing.

At the time you could say launching Arterys was a leap of faith, based on our vision for patient-driven insights, data-driven medicine, and a commitment to the cloud.

AI and machine learning have been around for decades, and some imaging vendors began exploring machine learning-like approaches in the early 2000s. However, back in 2011, the forward-looking part of the healthcare industry was mainly focused on precision health and big data. Those were the key buzzwords and cloud wasn’t even part of the equation.

It was still extremely early for cloud, especially in healthcare. Early on, we even had an IT leader at a major academic medical center tell us that they would “never do cloud.” It took 10 years, but now everyone who's educated on the subject recognizes cloud's benefits, even the IT team from that same academic medical center. In fact, that center now won't consider a solution unless it's cloud-enabled.

Imaging AI as we know it primarily got its start because of deep learning, beginning with a key paper that came out in 2012, and leading up to the current surge in industry interest that started in 2017. We’re now seeing the AI hype curve slow down into more of a reality. There haven’t been any monumental events that immediately changed the way people in healthcare think about AI. Instead, imaging AI is slowly going through the expected adoption cycles, making its way from early adopters and towards the laggards.



How often do you come across radiologists who are concerned that AI will endanger their job?

I rarely hear concerns that AI could eliminate radiologists' jobs among the physicians I work with, but those folks are already interested in AI. Still, there is certainly a pool of radiologists who are concerned that AI might replace them.

However, this happens in every industry, and it's still very early in AI's evolution. I think it will be decades before AI could realistically challenge radiologists' current jobs, and radiologists' jobs are going to keep evolving along with AI.

It’s been far more common to see radiologists realize that AI can accelerate their workflow and help them day-to-day, and we expect that trend to continue.



How has Arterys’ own AI platform evolved?

In the last few years we’ve opened up our platform to support more and more use cases, including more modalities and more top service lines. The most notable expansions have been in cardiovascular imaging, neuroimaging, women’s health, and within acute and X-ray service lines.

We had to add more functionality to our underlying platform in order to support this portfolio expansion, opening up our platform to the point that it’s almost self-serve. By opening the platform we’ve seen expanded adoption from not only our 40+ AI vendor partners but also healthcare institutions using the Arterys platform to deploy, share, and refine their own AI tools, fully in their control.

That’s exciting because we want to have an entire ecosystem to support early innovators, researchers, and academic medical centers. Even larger IDNs have strategic initiatives around AI and are funding researchers to develop AI models that they want to integrate.

We want to help support that evolution and push these models from research into clinical practice, ultimately becoming commercial products. We're helping manage that too, because we have folks who can help with regulatory support, commercial go-to-market support, and the entire commercialization trajectory.

We also continue to refine these products and their implementations, thanks in part to our Center of Excellence program.



What inspired you to create Arterys’ Center of Excellence program?

The imaging industry lacks the clinical evidence and data to prove the value proposition of its products. Healthcare marketing folks make all these grandiose statements, but when you double-click on them, there's often not a lot of data to support them.

This is also where most AI providers and users are lacking, and it’s the reason we created the Center of Excellence program.

Through the Center of Excellence program, we work with our major medical institution customers to go one step deeper and make sure their AI adoption is happening and it’s actually impacting whatever needs to be impacted. These improvement targets usually include patient outcomes and efficiency, so we’re often trying to create an infrastructure that solves both of those problems.

With our Centers of Excellence we translate actual data to show clients how we were able to accomplish their AI goals because we worked with them to change their workflow and helped them guide behavioral changes.

AI success is so much more than a working product. People talk about AUC, sensitivity, and specificity, but that’s less than 5% of the problem. You still need to have the infrastructure and the clinical workflow and the behavioral change to adopt this stuff.



Who would be involved in a Center of Excellence partnership?

Every partnership starts with clinical users, but the things we measure and improve would be very different depending on the specific product, its users, and the organization.

For example, our X-ray product targets ED physicians and, to a lesser extent, radiologists, giving them a tool to quickly triage, treat, or discharge patients. We'd work with that partner to confirm that the X-ray solution actually improves outcomes and helps treat patients faster.

It’s very different with our cardiac product, which is used by cardiologists and radiologists, and is absolutely required for diagnosis. With these partners, we’d work with them to help confirm that the cardiac product works as needed.

In any scenario, the clinical users would be a starting point but we’d also work closely with senior leadership like CIOs, CMIOs, and CFOs to make sure institutional goals are being met.

We're actively looking for more Center of Excellence partners, especially in the neuroimaging and oncology spaces.


How are your Center of Excellence partners’ improvements communicated?

We’ve done a good job making this as non-intrusive as possible. Because we’re completely cloud-based we can usually integrate with our partners in a few minutes, and we can also collect more detailed clinical information for partners interested in understanding their patients’ pre- and post-imaging pathways.

We provide Center of Excellence partners with all outputs from each patient session to any of their imaging IT or EMR platforms, allowing them to monitor and analyze their progress.


Can you tell me about your most successful Center of Excellence partners?

The most successful Centers of Excellence really care about making AI real, and they are willing to dive in, run assessments, and perform trials to make sure that we're actually impacting whatever we set out to improve. UMass Memorial Health Care here in the U.S.A. and Centre Hospitalier de Valenciennes in France are a couple of great examples of sites that are doing this.

The most successful Centers of Excellence truly had clinical pain points that hurt badly enough to make solving them a priority. For example, we've had some partners who kept ED patients waiting for X-ray results for hours or had to discharge patients without their results. That's a massive pain point, and it's enough to make hospitals get serious about finding solutions.

The opposite of that is hospitals that say, “oh, let's get AI in here,” but aren't sure what their unmet clinical need is, or whether they even have one. The fact that these hospitals don't know where they can improve suggests that they have a lot of ways to improve, but they have to identify those challenges and commit to addressing them before they are ready to become a Center of Excellence.


What should healthcare institutions ask themselves when considering being a Center of Excellence?

The first thing they should ask themselves is whether they are committed to making AI real. I think that's a really important question. Because if they're not truly invested in actually helping patients or improving workflow, they're not an ideal candidate. I don't care about marking an AI adoption checkbox. What I care about is working with our partners to make their AI adoption impactful.

Potential partners should also understand their goals and confirm that they are ready to work together to achieve those goals, because many improvements come from outside of the software, and continuous improvement is a collaborative process.


About Arterys

Arterys is the market leader and the world’s first internet platform for medical imaging. Its objective is to transform healthcare by transforming radiology. The Arterys platform is 100% web-based, AI-powered, and FDA-cleared, unlocking simple clinical solutions.

Winners Announced for 2020 Imaging Wire Awards

The Imaging Wire is thrilled to announce the winners of the 2020 Imaging Wire Awards, honoring this year’s most outstanding contributors to radiology.

The following Imaging Wire Award winners were nominated by their peers and selected by a panel of judges for their efforts to evolve radiology and improve the lives of clinicians and patients:


COVID Hero: Byron Christie, MD; Associate Chief Medical Officer of Integrations, Radiology Partners

When the pandemic hit, Dr. Christie and nine radiologists from RP’s SEAL team traveled across the U.S. to provide care in hard hit regions. After recovering from a COVID-19 infection that he contracted while treating patients in Florida, Dr. Christie increased his efforts to fight COVID-19 through his work at RP, continued plasma donations, and by educating medical students.


Diagnostic Humanitarian: Daniel J. Mollura, MD; President and CEO, RAD-AID International

Dr. Mollura is the Founder and CEO of RAD-AID International, a nonprofit organization dedicated to expanding radiology care to underserved and resource-poor communities. Over the last 12 years, Dr. Mollura grew RAD-AID to nearly 14,000 members serving over 80 hospitals in 35 countries. Among many accomplishments this year, RAD-AID’s residency program in Guyana will graduate its first class of radiologists.


AI Activator: Jon T. DeVries, CEO; Qlarity Imaging

Under Jon’s leadership, Qlarity Imaging has made significant progress developing the company’s QuantX software, which integrates images from multiple modalities to assist radiologists in the assessment and characterization of breast abnormalities. DeVries continues to expand QuantX’s capabilities and market reach with an innovative approach to product development and partnerships.


Burnout Fighter: Marla B.K. Sammer, MD; Associate Professor of Pediatric Radiology, Texas Children’s Hospital

Faced with Texas Children’s Hospital’s massive imaging volume growth, Dr. Sammer introduced a new initiative to optimize workflow, balance distribution across teams, and improve radiologists’ workdays. These changes reduced Texas Children’s average turnaround for X-rays by 25% and other modalities by over 27%, while helping its radiologists reliably predict their workday, fostering a sense of fairness and control, and reducing burnout.


Insights to Action: Syed Zaidi, MD, MBA; Associate Chief Medical Officer for Integrations, Radiology Partners

Dr. Zaidi has consistently tackled imaging waste throughout his career, participating in Choosing Wisely and serving as a leader in the ACR’s Imaging 3.0 initiative. Dr. Zaidi also developed a utilization management program at his local hospital to limit unnecessary chest CT scans for pulmonary embolism, while helping to roll out a best practice recommendations program across Radiology Partners.


Continued Care: Jinghong Li, MD, PhD, University of California San Diego

Dr. Jinghong Li is an attending physician and associate professor specialized in pulmonary diseases and critical care at University of California San Diego. While caring for COVID-19 patients in UCSD’s ICU, Dr. Li also worked with engineers and scientists to develop a wearable ultrasonic patch to allow continuous bedside ultrasound monitoring. This patch would alleviate infection control concerns associated with manual bedside imaging, while helping predict respiratory failure due to COVID-19 pneumonia.


Cornerstone: Karen Holzberger, SVP and GM; Healthcare Diagnostics, Nuance Communications

As the leader of Nuance’s healthcare diagnostics team, Karen’s top focus is to drive innovations that advance the practice of radiology. That was on display this year, as Ms. Holzberger led the development of new capabilities that prioritize and add insights to COVID-related exams, delivered on Nuance’s promise to enable “AI at scale” through the Nuance AI Marketplace, and continued to enhance PowerScribe One.


Diversity & Inclusion: Kristina Elizabeth Hawk, MD; President, Matrix East Pod A, Radiology Partners

Dr. Kristina Elizabeth Hawk is a founding member of the RP Belonging Committee, which designs tracks and programs intended to amplify the roles of minority groups in the practice. Dr. Hawk has led outreach to diverse radiology residents and fellows, and serves on the ACR’s Commission for Women and Diversity, Stanford’s Radiology Diversity committee, and Ambra’s #Radxx board.


Congratulations to this year's Imaging Wire Award winners and nominees, whose efforts to elevate radiology are truly inspiring. Also, thanks to this year's amazing judges and everyone who nominated these very deserving imaging professionals!

The 2020 Imaging Wire Award judges include: Bill Algee of Columbus Regional Hospital, Dr. Jared D. Christensen of Duke University Health, Dr. Keith J. Dreyer of Partners Healthcare, Dr. Allan Hoffman of Commonwealth Radiology Associates, Dr. Terence A.S. Matalon of Einstein Healthcare Network, and Dr. Syam Reddy of University of Chicago Ingalls Memorial.



Imaging Wire Q&A: Evolving With Hitachi VidiStar

With John Stock, MD, FACC
Pediatric Cardiologist
Pediatric Cardiac Care of Arizona


The role of imaging in pediatric cardiology has evolved tremendously in recent years, so in order for these practices to operate successfully, their PACS systems have to evolve at the same pace. That can be easier said than done, but it’s exactly what happened with Pediatric Cardiac Care of Arizona and its VidiStar PACS system.

In this Imaging Wire Q&A, we sat down with Dr. John Stock of Pediatric Cardiac Care of Arizona to discuss the evolving role of imaging in his practice, how Hitachi’s VidiStar PACS has evolved along with it, and what pediatric cardiology practices should look for in their own PACS systems.



Tell us about your practice and how you use imaging.

We perform and interpret approximately 3,000 pediatric studies per year. I interpret all the cardiac ultrasound studies independently after reviewing and confirming measurements.

From there, VidiStar generates a report that is often faxed to the referring physician. The studies are digitally archived on our server and in the cloud, with reports maintained in the electronic medical records. We follow all appropriate use guidelines and quality assurance initiatives, and we are an IAC accredited lab.



How has your practice been impacted by the COVID-19 pandemic?

COVID-19 has definitely affected my practice, but not how some might think. We experienced a 20% to 30% drop in patient volumes during the shutdown's peak months. There appears to have been a rebound, as children and adolescents returned to their pediatricians, schools, and sports.

Pediatric patients with congenital heart disease have a higher risk of complications. As a result, we are cautious in our follow-up and in some cases evaluate for possible findings related to COVID-19. There is also a related condition called multisystem inflammatory syndrome in children (MIS-C), which can result in decreased ventricular function and coronary artery dilation. This requires prompt management and follow-up.



You’ve been using VidiStar for quite a while, can you share how you use it?

I use VidiStar on a daily basis for interpreting and completing reports on my pediatric, adult congenital, and fetal cardiology patients. This includes looking at the study as the sonographer performs an evaluation, followed by an independent review with measurements confirmed by the VidiStar reporting package, and then saving the study to our server.



How has VidiStar changed over time?

VidiStar has come a long way since Hitachi acquired the platform two-plus years ago, turning it into a system that is affordable, user friendly, and can support the simplest and most complex pediatric cases.

I’ve benefited most from the improvements to VidiStar’s pediatric reporting package. At first, VidiStar’s pediatric package was very basic and utilized an adult format, requiring me to do a lot of work outside of the platform. Kids are not small adults. They have their own complexities. The reports need to reflect the variation in anatomy that can occur in congenital heart disease.

Hitachi came in, made a commitment to pediatrics, and VidiStar now fits the needs of most pediatric cardiology practices. In just the last two years, the pediatric package improved many measurement parameters and Doppler measurements, which allow me to perform comparisons over time.



What advice can you share for pediatric cardiologists evaluating new PACS systems?

Any independent pediatric cardiology provider considering a new PACS system should evaluate how each system would meet their clinical and workflow needs and whether it fits their budget.

Most important for me clinically is the ability to track changes over time and to know that I can be confident when I send reports to some of the best centers in the country. The reports must also look professional, with appropriate identification of pertinent impressions, as well as documentation of pertinent positive and negative findings.

From a workflow perspective, it is also very important that the PACS system interfaces well with the electronic medical records, and that it’s easy for both the sonographer and the physician to use.

It’s also crucial that the PACS system works consistently. By that, I mean that the system always works and its output is reproducible and consistent over time, which isn’t guaranteed with some platforms.

Pediatric cardiologists should also look for reporting packages that clearly document Z scores and Doppler velocities, which are necessary for appropriate billing. Incorporation of 3-D and strain will also be necessary going forward.


About Pediatric Cardiac Care of Arizona

Based in Tempe, Arizona, PCCA’s mission is to partner with patients, families, and referring physicians in order to provide excellent outpatient cardiac care in an environment of trust, openness, and professionalism.

Dr. John Stock has cared for patients with congenital and acquired heart disease for over 20 years, after receiving his medical degree from Upstate Medical Center, completing his pediatric residency at Phoenix Children’s Hospital, and undergoing fellowship at Oregon Health Sciences University.


About Hitachi VidiStar

The Hitachi VidiStar Platform gives physicians and healthcare providers the ability to read and interpret diagnostic studies over the internet for timely interpretation, improved patient diagnosis, clinical decision support, and advanced patient data analytics and notification.

VidiStar provides highly customizable infrastructure for multi-modality viewing, reporting, and analytics while interfacing with existing IT systems for one seamless solution.

Nominations Open for the 2020 Imaging Wire Awards

Nominations are now open for the 2020 Imaging Wire Awards, honoring this year’s most outstanding contributors to radiology practice and outcomes.

The 2020 Imaging Wire Awards will be presented to eight imaging professionals for achievements in the following categories:

  • COVID Hero: for excellence in COVID-19 care and research
  • Insights to Action: recognizes efforts to reduce unnecessary imaging
  • Diagnostic Humanitarian: for achievements supporting equity in patient care
  • AI Activator: recognizes actions to use artificial intelligence to improve patient care
  • Continued Care: honoring efforts to maintain patient care throughout the COVID-19 emergency
  • Burnout Fighter: for addressing inefficient work practices that lead to physician burnout
  • Cornerstone: honoring non-physicians for outstanding contributions to the practice of radiology
  • Diversity and Inclusion: recognizing efforts to improve diversity and inclusion in imaging

Those interested in applying or nominating a colleague for one of the above Imaging Wire Awards can do so until November 5th through this link.

Winners will be selected by a panel of industry leaders and recognized during RSNA 2020.

The 2020 Imaging Wire Awards judges committee includes:

  • Bill Algee, CRA, FAHRA – Columbus Regional Hospital
  • Jared D. Christensen, MD, MBA – Duke University Health
  • Keith J. Dreyer, DO, PhD, FACR, FSIIM – Partners Healthcare
  • Allan Hoffman, MD – Commonwealth Radiology Associates
  • Terence Matalon, MD, FACR, FSIR – Einstein Healthcare Network
  • Syam Reddy, MD – University of Chicago Ingalls Memorial, Radiology Partners Chicago

About The Imaging Wire

The Imaging Wire is a newsletter and website dedicated to making it easy for the people of medical imaging to be well informed about their specialty and industry. Read twice weekly by thousands of global radiology professionals, The Imaging Wire is the first publication from business news company Insight Links, which is dedicated to expanding news literacy across healthcare. For more information: https://theimagingwire.com/.

Imaging Wire Q&A: HAP Redefines Partnership

They say that in times of crisis, you get to know who your real friends and partners are. This adage gained new significance for Triad Radiology Associates earlier this year, as the COVID-19 pandemic upended its operations and Healthcare Administrative Partners (HAP) stepped up to help guide the practice through this unpredicted disruption.

In this Imaging Wire Q&A, we sat down with Darlene Clagett, Director of Revenue Cycle Management at Triad Radiology Associates, and Rebecca Farrington, Chief Revenue Officer at Healthcare Administrative Partners, to discuss their partnership and how it evolved during the COVID-19 emergency.



The Imaging Wire: You’ve been working with Healthcare Administrative Partners (HAP) for over two years. Can you share a bit about how your revenue cycle management operations improved since you started working with them?


Darlene: We looked at multiple revenue cycle vendors during our evaluation process. Our process was very thorough because we wanted a partnership that would sustain us through whatever challenges might come our way. At Triad we have invested in our leadership, our employees, and our technology, and we wanted a revenue cycle partner that made those same investments.

We feel like we have a true partnership with HAP. We communicate frequently and work together to consistently improve our metrics. We have seen improvement in many areas: coding accuracy is much better, days in AR are down, denial write-offs are fewer, reconciliation of services to charges is managed monthly, and net collections have increased.



The Imaging Wire: The pandemic has caused unique challenges that have never been experienced by private independent radiology groups. Did you look to Healthcare Administrative Partners to add value beyond their standard services? Can you talk about what they did above and beyond a normal revenue cycle scope of service?


Darlene: It became immediately evident that the pandemic was having a dramatic impact on radiology practices and their revenue cycle partners, as volumes dropped significantly under stay-at-home orders.
From the beginning, HAP kept us updated regarding their ability to maintain operational excellence, while protecting their staff by quickly moving to a virtual environment. They had the IT infrastructure in place to ensure that processes were executed securely.

We requested estimated cash flow projections to assist us in planning and applying for relief programs and these were provided promptly. HAP also kept us apprised of relief program updates as they were happening and provided recommendations for methods to apply. In a couple of instances, they gave us advice and prompt updates that made the difference in our ability to receive relief funds.



The Imaging Wire: Rebecca, an RCM scope of services typically covers the nuts and bolts of the billing process. When the pandemic began, were you concerned about how you would be able to support your clients’ new challenges?


Rebecca: The short answer is yes. As a small business ourselves, we had to find the right resources to guide our decisions around ensuring our financial security, as well as doing our part to protect both the safety and the financial well-being of our employees. It also quickly became clear that our clients could benefit from our research and connections.

The easy thing to do would have been to “stay in our RCM lane” and do nothing – but that is the difference between a vendor and a true partner. It was not necessarily in our scope of service to advise on financial matters involving small business loans, but these are unprecedented and confusing times that called for new and different action on our part. There is an accountability and a responsibility that comes with making recommendations like these, so we did not take the decision to share them lightly. We did our homework, double-checked our resources, brought in our experts, and did our best to step up for our clients.



The Imaging Wire: Now that imaging volumes are ramping up, how are you working with HAP to prepare for the post-COVID rebound?


Darlene: HAP has played an integral role in helping us plan for the return to our pre-COVID revenue numbers. They have done a great job helping us build out revenue projections, which has helped with our staffing plans. They also provided guidance on PPP programs to help fill the revenue gap as volume improves. HAP now provides us with current weekly volume comparisons to pre-COVID dates so we can see how we are progressing in our return to previous numbers. They have also shared data so we can compare our rebound to that of our peers. This benchmarking is critical to revenue planning.


The Imaging Wire: When you made the decision to change RCM partners, you underwent a very detailed evaluation of the market and available options. After your experience with HAP these last two years, what recommendations do you have for groups that are beginning the process of considering an alternative to their current RCM set up?


Darlene: Any radiology practice considering RCM partners should prepare a detailed Q&A for their RFP so that they ask the same questions to each company. It’s also important to request involvement in the process from the people you will work with day to day.

Practices need to decide what they are looking for and make sure they are comfortable that their selection can provide it. We wanted a partner that would function as “our” billing department with dedicated staff to handle Triad. We are very pleased with the decision that we made and the great job that HAP is doing for us.



The Imaging Wire: Rebecca, you said that HAP acts as a true partner to your clients, not just a vendor. What does partnership mean to you?

Rebecca: Partnership is a two-way street. It is a relationship that is mutually beneficial and supportive, setting both parties up for success. It means stepping up and doing what you need to do to help, not because you have to, but because it is the right thing to do. Our clients’ success is our success.


About Triad Radiology Associates:

Triad Radiology has supported the Piedmont Triad, North Carolina area with high-quality imaging and radiology services for over 50 years. Triad’s 45 diagnostic and interventional radiologists, state-of-the-art technology, and patient-centric approach assure that its patients can get the care they need and get back to the important things in life.

About Healthcare Administrative Partners:

Healthcare Administrative Partners (HAP) empowers hospital-employed and privately owned radiology groups to maximize revenue and minimize compliance risks despite the challenges of a complex, changing healthcare economy. HAP goes beyond billing services, delivering the clinical analytics, practice management, and specialized coding expertise needed to fully optimize revenue cycles. Since 1995, radiologists have turned to HAP as a trusted educator and true business partner.

Imaging Wire Q&A: Quantifying Riverain Technologies ClearRead CT

With Professor Thomas Frauenfelder
Deputy Director of Diagnostic and Interventional Radiology
University Hospital of Zurich

It says a lot when a solution works so well for a radiology department that the department decides to perform a study to quantify its benefits. That is exactly what happened at the University Hospital of Zurich (USZ), which set up a study on the clinical and workflow benefits of Riverain™ Technologies ClearRead™ CT after implementing the solution into its chest CT workflow.

In this Imaging Wire Q&A, we sat down with Professor Thomas Frauenfelder, Deputy Director of Diagnostic and Interventional Radiology at USZ, to discuss how ClearRead CT improved his team’s chest CT reading performance. The study they performed quantified efficiency and accuracy along with key observations to aid other radiology teams looking to bring new CAD solutions into their workflows.




The Imaging Wire: Tell us about your team and how you handle chest CT reading volume.

Professor Frauenfelder: The Institute of Diagnostic and Interventional Radiology at the University Hospital of Zurich consists of about eighteen staff radiologists and twenty residents. Last year we performed around 35,000 CT scans, 40% of which were chest CTs. For reading, we mainly use a standard PACS system.

Since we do not have a lung cancer screening program, most CT scans are related to either trauma, vascular pathologies, tumor diagnosis and follow-up, or interstitial lung diseases. During daytime shifts, about three staff radiologists read up to 70 CT scans.


The Imaging Wire: Why did you start using ClearRead CT and how do you use it?

Professor Frauenfelder: Several years ago, we evaluated a number of applications for lung nodule detection. Although many applications had a very high detection rate, we seldom used them because our radiologists were forced to open a second application just to see the results. Even then, it was common that when our radiologists opened the second application, the cases had not been read by the system.

The advantage of ClearRead CT is that it sends the “nodule-only” images back into the PACS, where they can be reviewed side by side with the “normal” lung window by forming specific hanging protocols. Our radiologists liked this type of display because they were able to stay in the system and quickly get an overview of possible lung nodules.


The Imaging Wire: Is that what inspired you to perform your study?

Professor Frauenfelder: We found that radiologists were able to review cases much more efficiently and safely with this type of display, especially the young residents. Since there was limited scientific data on the use of the software, we decided to conduct a study to confirm ClearRead CT accuracy and efficiency.

For the study, we created vessel-suppressed reconstructions of 100 patients’ contrast-enhanced chest CTs using ClearRead CT. Two groups of three radiologists each read the two sets of images, and we found that vessel-suppressed CTs yielded a 21% greater nodule detection rate, much higher interreader agreement, and about 20% shorter average read times.


The Imaging Wire: What were the most compelling takeaways?

Professor Frauenfelder: Well, we expected that the results would be in favor of ClearRead CT concerning the detection rate and reading time, but it was surprising that the advantages were so significant.


The Imaging Wire: What was your experience with respect to ClearRead CT’s ease of installation and integration into the workflow?

Professor Frauenfelder: ClearRead CT was very easy to install for our ICT. The advantage is that we can adapt many parameters on our own, especially if CT protocols are changing. This gives us a lot of flexibility.

Because all post-processed images are directly stored into the PACS, they are accessible without changing applications. This saves a lot of time. We can also access the results in more detail by using the Web interface, if needed.

Overall, it keeps workflow running very smoothly.


The Imaging Wire: Based on your research and experience with ClearRead CT, what do you see as the most important qualities to look for in a CAD product?

Professor Frauenfelder: Well, many products today are very accurate for the depiction of pulmonary nodules. Some might be too sensitive. Since we do not have a lung cancer screening program, it is important that the system fits into our existing workflow and that it assists the radiologist by providing a nodule-specific recommendation about follow-up. Furthermore, the results should be easily transferable into reports.


The Imaging Wire: Do you have experience with any other ClearRead applications (e.g., ClearRead Xray with Bone Suppression)? If so, can you share your experience with them?

Professor Frauenfelder: We also use ClearRead Xray with both bone suppression and image enhancement. Our first impression is that ClearRead Xray helps us see pathologies more clearly and more accurately. ClearRead Xray installation and workflow were also very easy, and we’ve benefited from being able to integrate the images in specific hanging protocols on our existing PACS review station.

We also performed a study evaluating the use of ClearRead Xray for COVID-19 diagnosis that we’ll publish in the future. In this retrospective study, we evaluated the diagnostic accuracy of conventional radiography (CXR) and enhanced CXR (eCXR/ClearRead Xray) for the detection and quantification of disease extent in COVID-19 patients, compared to chest CT. Our initial findings show that the use of ClearRead Xray increases interreader agreement and has a higher sensitivity for the detection of consolidation. So it seems that ClearRead Xray improves the detection of COVID-like pneumonia. However, further analysis is needed.


About Professor Frauenfelder

Thomas Frauenfelder is a professor of radiology at the University Hospital of Zurich (USZ), as well as its head of chest imaging and deputy director of the Institute for Diagnostic and Interventional Radiology. He has a special interest in medical imaging and the architecture of PACS in the hospital environment.

Imaging Wire Q&A: Qure.ai and MEDNAX Validate AI in the Wild

As the number of available imaging AI algorithms grows each month, the ability to truly validate a model’s performance and use that validation to enhance its clinical and operational performance has arguably become more important than the study-based accuracy claims that had everyone so impressed just a few years ago.

You could say that we’re at the “prove it and improve it” phase of the imaging AI adoption curve, which is what makes Qure.ai’s recent algorithm validation partnership with MEDNAX and vRad so interesting – and so important.

In this Imaging Wire Q&A, we sat down with Chiranjiv Singh, Qure.ai’s Chief Commercial Officer; Brian Baker, vRad’s Director of Software Engineering; and Imad Nijim, MEDNAX Radiology and vRad’s CIO, to discuss the origins and results of their efforts to validate Qure.ai’s qER solution “in the wild.” Here it is:



The Imaging Wire: How did Qure.ai and MEDNAX come to work together?

Brian Baker: To understand how the Qure.ai and MEDNAX partnership was established, a quick history of the MEDNAX AI incubator helps. MEDNAX has been working with AI partners in various forms since 2015, with the primary goal of improving patient care. Qure.ai was one of the earlier partners in that process; before the incubator officially launched in 2018, Qure.ai was already collaborating on advanced solutions.

One important thing we bring to these AI partnerships is our massive and diverse data. MEDNAX Radiology Solutions has 2,000-plus facilities in all 50 states. We have radiologists all across the country reading over 7.2 million studies on the MEDNAX Imaging Platform. We have an enormous, heterogeneous data set. The data is not only representative of a very diverse population, but also a very diverse set of modality models, configurations, and protocols.

My primary focus for AI at MEDNAX Radiology Solutions is first and foremost patient care – helping patients is our number one goal. But also important, we want to foster a community of AI partners and use models from those partners in the real world. A big part of that is building models and/or validating models.

Qure.ai came to us with models already built on different data sets. They didn’t need our data set to perform additional model training, but they wanted to do real world validations to ensure their models and solutions were generalizing well in the field against an extremely large and diverse cohort of patients.

That is where the relationship blossomed. Our partnership first focused on the complex aspects of how we see different use cases from a clinical standpoint; we very much align on both use cases and pathologies; this alignment is a critical step for everyone – AI vendors and AI users in radiology alike. The clinical nuances to using a model in production are incredibly intricate, and Qure.ai and MEDNAX’s convergence in this area is a large part of our success.


Chiranjiv Singh: From our inception as a company, there was a clear emphasis that Qure.ai as a brand has to stand for proving the applicability of our product in a real-world context. And, for us to make a significant impact for our customers and their patients, the results have to be highly measurable. This means that our solutions need to be extensively tested and credible at every level. Achieving this degree of validation requires a high volume and variety of independent data sets, and it also required us to expose our algorithm to rigorous test conditions.

That is where our strategic goals aligned with MEDNAX’s goals – and, together with the MEDNAX team, we started calling this validation exercise “testing in the wild.” The Qure.ai team saw the value of partnering with someone of MEDNAX’s size and caliber to drive the variety, volume, and rigor needed to validate every aspect of our solution. Without leveraging the scale and the volumes of MEDNAX, we would never have been able to achieve this in such a short period of time unless we had worked with roughly 100 different hospitals in the U.S.

What made the partnership stronger was the caliber of the MEDNAX team and the overall platform that they provided for us to jointly learn and improve. And, for these reasons, a very strategic alignment came about for both our teams, jointly working to make this “validation in the wild” a successful project for us both.


Brian: I believe only half the problem is proving your sensitivity and specificity with a large, diverse patient cohort. That is obviously extremely important for clinical and ethical reasons, but the other part of the problem is figuring out how to ensure that a solution or model works on all the various types of DICOM in the industry. At MEDNAX Radiology Solutions, we see everything in DICOM that you can imagine and some you would not believe. That might be anything from slightly malformed DICOM, to data in non-standard fields where it shouldn’t be, to secondary captures or other images inside of the study, down to all the protocols involved in imaging (how the scan is actually acquired). With our scale and diversity of data, a model that can operate without erroring or crashing through a single night is an engineering feat on its own.



The Imaging Wire: Brian, can you share about the test, the results, and takeaways?

Brian: We’ve taken Qure.ai’s container-based solution that includes the AI models and plugged it into MEDNAX Radiology Solutions’ own inference engine. In our inference engine, image studies flow to models/solutions that are in a validation run in nearly the same way that they will run if they successfully pass our validation. The major difference is that during validation, the results of the models do not initiate any action in the MEDNAX Imaging Platform – instead we just gather the data.

As imaging studies flow through the inference engine, we capture the results along with the results of Natural Language Processing (NLP) models run against our clinical reports (from radiologists). This allows us to very quickly determine how a model is doing at MEDNAX scale. We compare the NLP results to the Image AI results and have a very good understanding of how the model is performing within the MEDNAX Imaging Platform.

My team monitors all models on a continuous basis. For models being validated, this data is what makes up the core basis of our validation process. For models that have already been validated, this continuous monitoring ensures that models remain within approved thresholds – if a model successfully goes through our validation process and is approved by clinical leadership, it is important that the model continues to operate with the same sensitivity and specificity. If for any reason the data changes (patient demographic makeup, image acquisition changes, etc.) and the model no longer performs to our standards, we are alerted and remove that model from usage on the clinical platform.
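The comparison described here – treating NLP-extracted findings from the radiologists’ reports as a reference standard, scoring the imaging model against them, and flagging when performance drifts below approved thresholds – can be sketched in a few lines of Python. This is an illustrative sketch only, not MEDNAX’s actual implementation; the function names and threshold values are hypothetical:

```python
# Illustrative sketch: score an imaging AI model against NLP-derived
# labels from radiologist reports, then check the result against
# approved sensitivity/specificity thresholds for continuous monitoring.
# All names and threshold values are hypothetical.

def confusion_counts(nlp_labels, model_preds):
    """Count TP/FP/TN/FN, treating the NLP labels as the reference standard."""
    tp = fp = tn = fn = 0
    for truth, pred in zip(nlp_labels, model_preds):
        if truth and pred:
            tp += 1
        elif not truth and pred:
            fp += 1
        elif not truth and not pred:
            tn += 1
        else:
            fn += 1
    return tp, fp, tn, fn

def within_thresholds(nlp_labels, model_preds,
                      min_sensitivity=0.90, min_specificity=0.90):
    """Return (sensitivity, specificity, ok) for a monitoring check."""
    tp, fp, tn, fn = confusion_counts(nlp_labels, model_preds)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    ok = sensitivity >= min_sensitivity and specificity >= min_specificity
    return sensitivity, specificity, ok
```

In production, the labels would come from NLP models run over clinical reports and the predictions from the inference engine; a failing check would be what triggers the alert-and-removal step described above.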

For a validation run, we typically run a model for two weeks, and then capture those two weeks of data for further evaluation. The Qure.ai model has been running for several months to make sure it is hardened and successful. There were 300,000 studies that passed through when we looked in October. While the validation set is only two weeks of data, Qure.ai’s model held a consistent sensitivity and specificity throughout the process of integration.

For the validation evaluation, we built a validation document for Qure.ai that explores not only sensitivity and specificity against various NLP definitions, but also smaller hand-reviewed sub-cohorts as well as added analysis focused on sex and age breakdowns.



The Imaging Wire: What were some of the key takeaways for Qure.ai in terms of validation and learning about how your models performed “in the wild?”

Chiranjiv: We learned a great deal going through this process. A lot of work went into the back-end R&D process – re-examining our data science models and engineering analysis and really pinpointing where the weak points are and where the model can potentially break down. Our team was able to use the feedback and look at real clinical cases to fix these shortcomings and test them again, with constant feedback coming in through MEDNAX. This has made our solution more accurate, our predictive analytics sharper, and our engineering ability far stronger than when we started out. Having the ability to go through the exercise of assessing 300,000 exams in a performance evaluation is a powerful proving ground. We confidently share this with our customers by pointing out that “the accuracy or performance of a model is only one part of fulfilling the promise of making AI real.”

The way the MEDNAX Imaging Platform is set up, it’s like getting near-real-time, live feedback on potential areas of error: improving the model and seeing your false positives and false negatives decrease with every round of testing. We learned so much looking at the variety of data, different kinds of DICOMs, incorrect DICOM tags, diverse acquisition protocols, every possible CT manufacturer, varying slice thicknesses, etc. Even though we had a lot of that before this partnership, this experience gave us an opportunity to bring stronger products to the market.

The next step for us is to share this with our potential customers and leverage this partnership to further spread the word that “making AI real” is not just about algorithm accuracy. Yes, accuracy is a critical piece, but if, for example, you’re not meeting speed requirements (like those vRad and MEDNAX Radiology Solutions had), there is no point in taking 10 minutes to read a CT when the entire turnaround is less than 10 minutes.

As a result of this partnership, we have made significant strides in our journey from innovative data models to working AI products. The Qure.ai team now has the ability and the confidence that, if any large client wants to deploy “AI in the real world,” we have the expertise and experience in handling the kind of volume and variety that we would have never experienced without working with vRad and MEDNAX Radiology Solutions.



The Imaging Wire: Many in the AI research community highlight a need for multi-center prospective studies. What role do you think this type of partnership can play in the absence of these studies, or as a contributor to them?

Brian: I view MEDNAX Radiology Solutions’ role in the AI community as a mandate to help companies such as Qure.ai run large multi-center validations. Often, the community at large views this type of validation as important due to the diverse population of patients. And while I agree that is incredibly important, it is worth noting that it is also important to validate against various DICOM implementations and image study acquisition parameters.


Imad Nijim: There is obviously a lot of research going into this, and the academics are very active with this work. For us, a big focus is on the real-life implications, and there was really hard work on both sides. One of the first steps was defining intracranial hemorrhage; MEDNAX and Qure.ai had different definitions that they had to reconcile. They had to dig into the minutiae of their definitions, and the reconciled results went into the AI and imaging models they built together.


Chiranjiv: This was not a validation study with one institution that has a standard protocol, defined patient profile, limited device inputs, etc. This is the fastest and closest you get to a multi-center study, as the exams are coming from hundreds of different medical facilities across the country. MEDNAX gave us the ability to validate the algorithm with a diverse data set, different user settings, equipment types, and all the other variability that a multi-center study would offer.



The Imaging Wire: Do you have any final thoughts on this partnership?

Chiranjiv: During this experience there was clear alignment on identifying the end value. We both realized that this project is not just about improving accuracy. If this is done well, it will influence decisions that directly impact patient lives. Most of the clinical cases involved CT scans being read as part of night services for medical facilities across the U.S. Many of these facilities, especially the smaller community-based hospitals, may not have experts to read these exams, especially during late-night hours. Our team had the context that if we do all this hard work to get the engineering, accuracy, and clinical definitions right, it positively impacts the patient. We can be the catalyst that makes the difference for that one patient. That has to be the north star. And this vision was what aligned Qure.ai and MEDNAX in the first place – and it’s what drove us to really get this right.


Imad: People who focus on the technology aspect of AI will get tripped up. The questions that people need to ask are: What problem are they solving? What workflow are they optimizing? What condition are they trying to create a positive outcome for? These are the questions that we need to ask and then back into with the technology component. It sounds simple, but a lot of people don’t understand that, and it’s a big differentiator between the successful and unsuccessful companies.


Nominations Open for First Annual Imaging Wire Awards

San Diego, California – October 7, 2019 – The Imaging Wire today announced that nominations are open for the first annual Imaging Wire Awards, honoring 2019’s most outstanding contributors to radiology practice and outcomes.

The Imaging Wire Awards will be presented to five imaging professionals for achievements in the following categories:

  • Insights to Action: recognizes efforts to reduce unnecessary imaging
  • Diagnostic Humanitarian: for achievements supporting equality in patient care, in the U.S. or internationally
  • AI Activator: recognizes actions to use artificial intelligence to improve patient care
  • Burnout Fighter: for addressing inefficient work practices that lead to physician burnout
  • Cornerstone: honoring non-physicians for outstanding contributions to the practice of radiology

Those interested in applying or nominating a colleague for one of the above Imaging Wire Awards can do so until November 8th through this link. Winners will be selected by a panel of industry leaders and recognized at RSNA 2019 in Chicago, Illinois.

The 2019 Imaging Wire Awards judges committee includes:

  • Bill Algee, CRA, FAHRA – Columbus Regional Hospital
  • Keith J. Dreyer, DO, PhD, FACR, FSIIM – Partners Healthcare
  • Terence A.S. Matalon, MD, FACR, FSIR – Einstein Healthcare Network
  • Jonathan Messinger, MD – Baptist Health South Florida
  • Pooja Rao, MBBS, PhD – Qure.ai
  • Irena Tocino, MD, FACR – Yale University
  • Syed Furqan Zaidi MD, MBA – Radiology Partners



Imaging Wire Q&A: Qure.ai’s Stroke Solution

I met Dr. Pooja Rao last year through a very revealing email exchange. I sent Pooja a note to share some recent Qure.ai coverage and invite her to subscribe, and she responded with a series of questions about the tools we use to automate this type of outreach. It was at that moment that I realized Dr. Rao is uniquely solutions oriented.

As Qure.ai’s co-founder and Head of R&D, Pooja is usually focused on solving far more important issues than email automation, using her background in medicine, data science, and neuroscience to make healthcare more accessible and affordable through deep learning.

In this first-ever Imaging Wire Q&A, we sat down with Pooja to discuss the current challenges in stroke and head trauma treatment and how AI solutions, such as Qure.ai’s qER product, stand to improve clinical outcomes. Here it is:


What drew Qure.ai to stroke and head trauma AI?

Pooja Rao: Stroke is one of the leading causes of death and long-term disability worldwide. Patient outcomes depend strongly on how quickly stroke is diagnosed and treated, measured as ‘symptom onset-to-needle’ time.

Most patients with a stroke go through an accelerated stroke protocol that includes rapid imaging and review, but there are many others with brain bleeds (stroke-related or otherwise) who are outside of this protocol. For example, a patient who’s already in the hospital for an ischemic stroke, but gets an acute bleed during treatment. That’s where you need AI that works in the background to pick up these scans and prioritize the right patients.

Over 2.5 million people suffer head injuries in the U.S. every year. A fraction of those will require urgent neurosurgical intervention – and imaging is key to making that decision. The use of CT scans in the emergency room has been on the rise for decades, which means that radiologists in turn have long lists of ‘STAT’ scans to review. If AI could scan through these and push the critical ones to the top of the list it would save a lot of valuable time for these patients.


What are the current stroke and trauma guidelines and how does AI fit in?

Pooja Rao: The 2018 American Stroke Association/American Heart Association (AHA/ASA) stroke guidelines say that non-contrast CT provides the information needed to make decisions about acute stroke management in most cases. They also say that the primary role of a head CT scan for patients with stroke symptoms is to rule out a bleed, and that there is no evidence for making treatment decisions based on the subtle CT signs of ischemia.

Further, they advocate for using non-contrast CTs to screen patients because it’s cost-effective. This means that radiologists’ head CT volume continues to grow. High-volume practices can have as many as 20 head CTs an hour in addition to all the other studies they read. Simply flagging critical scans would add a lot of value here.

Stroke centers are also required to score intracranial bleeds by volume. This is another area where AI can save radiologists time, by marking out brain hemorrhages and measuring their volume.
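In practice, once an algorithm has segmented a hemorrhage on CT, volume scoring reduces to counting segmented voxels and scaling by the scanner's voxel spacing. The sketch below illustrates that arithmetic only; it is not Qure.ai's implementation, and the function name and toy mask are hypothetical.

```python
import numpy as np

def bleed_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Estimate hemorrhage volume from a binary segmentation mask.

    mask       -- 3D array (1 = hemorrhage voxel, 0 = background)
    spacing_mm -- voxel spacing (z, y, x) in millimeters
    """
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.sum()) * voxel_mm3 / 1000.0  # mm^3 -> mL

# Toy example: a 10x10x10 block of "hemorrhage" voxels at 1 mm isotropic spacing
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:30, 20:30, 20:30] = 1
print(bleed_volume_ml(mask, (1.0, 1.0, 1.0)))  # 1.0 (mL)
```

A real pipeline would derive the spacing from the DICOM headers and the mask from the segmentation model's output, but the volume calculation itself is exactly this simple.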


What has research revealed about the performance of AI solutions for stroke and head trauma?

Pooja Rao: Standalone studies show that the technology works well and is safe and effective enough to be used in clinical practice, and it sounds like regulatory bodies agree, given the recent clearance of AI products to triage critical scans and assist radiologists.

Our own study, published last year in The Lancet, showed that qER accurately detects not only bleeds but also other critical head CT abnormalities such as mass effect (sometimes the only early sign of a tumor), midline shift, and cranial fractures.


What about in clinical use?

Pooja Rao: As we deploy at more hospitals and imaging centers, we’re generating evidence that AI works just as well in the clinical setting as it does in the lab. In addition to proving that the technology generalizes well (performs with high accuracy independent of the CT scanner model or population), we’re also quantifying the clinical benefit to patients, radiologists, and other physicians. When we evaluate the benefits of AI for stroke and head trauma we look at:

  • How much time is saved when critical scans are prioritized by AI?
  • How does this prioritization impact other studies on the worklist?
  • How are patient outcomes impacted?
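The first of those questions can be made concrete with a simple queueing thought experiment: under FIFO reading, a critical scan waits behind everything ahead of it, while AI triage moves flagged scans to the front. The sketch below is a hypothetical illustration of that effect, not a measurement from any deployment; the scan list and read times are invented.

```python
from dataclasses import dataclass

@dataclass
class Scan:
    scan_id: int
    read_minutes: float   # time a radiologist needs to read this study
    ai_critical: bool     # AI flag: suspected bleed, mass effect, etc.

def wait_times(worklist):
    """Return {scan_id: minutes waited before being read} for one reader."""
    clock, waits = 0.0, {}
    for scan in worklist:
        waits[scan.scan_id] = clock
        clock += scan.read_minutes
    return waits

# Hypothetical FIFO worklist; scan 4 is the critical head CT, near the bottom
fifo = [Scan(1, 10, False), Scan(2, 10, False), Scan(3, 10, False),
        Scan(4, 10, True), Scan(5, 10, False)]

# AI triage: stable sort that moves flagged scans to the top,
# preserving the original order among the rest
triaged = sorted(fifo, key=lambda s: not s.ai_critical)

print(wait_times(fifo)[4])     # 30.0 minutes under FIFO
print(wait_times(triaged)[4])  # 0.0 minutes after AI prioritization
```

The same simulation also answers the second question: the non-critical scans each wait at most one extra read slot, which is why triage can cut time-to-diagnosis for critical patients without meaningfully delaying the rest of the list.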



Where is head trauma and stroke AI being adopted first and who’s finding it most beneficial?

Pooja Rao: There is a lot of AI research coming out of academic centers, where quality of care is the highest and there’s an abundance of the best and brightest doctors. But care and radiology standards aren’t uniform across the world, or even within the U.S.

We’re seeing that the earliest serious AI adopters are community hospitals and remotely located healthcare providers where there may not be reliable, accurate 24×7 radiologist coverage. It also seems that geographies with a shortage of expert care are taking the lead in adopting AI, reflecting where value is truly being added.

Of course, there is still a long way to go and there are a lot of questions that need answers. Is the role of AI to prevent tired doctors from missing critical findings, to save time dictating reports, or to prioritize critical scans on busy worklists? Or is it all three?


And how are these solutions benefiting patients and radiologists?

Pooja Rao: For patients, a lot of the benefit of AI is access – just having access to rapid, accurate diagnosis and treatment, and not having to wait hours in the ER.

For radiologists, the benefits of AI differ based on the setting in which they operate. Busy urban practices or teleradiology setups benefit the most from having critical cases automatically flagged for review. Many radiologists also like having bleeds and midline shifts quantified because it saves them time. In places where radiologist coverage is sparse, radiologists and other clinicians find the mobile phone alerts with non-diagnostic preview images particularly useful.

These are exactly the patient and radiologist benefits we targeted with qER.


What’s the next frontier for head trauma and stroke AI?

Pooja Rao: Everyone wants algorithms that can be superhuman and see abnormalities that radiologists can’t, but there are easier problems to solve first.

One of these is incorporating clinical knowledge. In studies that we’ve done, we’ve observed that radiologists are at their most accurate when provided the full clinical context. We’re now training AI to incorporate that clinical context.

Another one is predicting long-term outcomes. qER already measures the volume of the abnormalities it detects to help study progression in patients with traumatic brain injury. We’re now going beyond quantification and progression monitoring to using these measures to predict patient outcomes.


Thank you, Pooja. It’s exciting to watch Qure.ai work with global healthcare providers to address serious conditions like stroke and tuberculosis and we can’t wait to see what’s next.

