Nominations Open for the 2020 Imaging Wire Awards

Nominations are now open for the 2020 Imaging Wire Awards, honoring this year’s most outstanding contributors to radiology practice and outcomes.

The 2020 Imaging Wire Awards will be presented to seven imaging professionals for achievements in the following categories:

  • COVID Hero: for excellence in COVID-19 care and research
  • Insights to Action: recognizes efforts to reduce unnecessary imaging
  • Diagnostic Humanitarian: for achievements supporting equity in patient care
  • AI Activator: recognizes actions to use artificial intelligence to improve patient care
  • Continued Care: honoring efforts to maintain patient care throughout the COVID-19 emergency
  • Burnout Fighter: for addressing inefficient work practices that lead to physician burnout
  • Cornerstone: honoring non-physicians for outstanding contributions to the practice of radiology
  • Diversity and Inclusion: recognizing efforts to improve diversity and inclusion in imaging

Those interested in applying or nominating a colleague for one of the above Imaging Wire Awards can do so until November 5th through this link.

Winners will be selected by a panel of industry leaders and recognized during RSNA 2020.

The 2020 Imaging Wire Awards judges committee includes:

  • Bill Algee, CRA, FAHRA – Columbus Regional Hospital
  • Jared D. Christensen, MD, MBA – Duke University Health
  • Keith J. Dreyer, DO, PhD, FACR, FSIIM – Partners Healthcare
  • Allan Hoffman, MD – Commonwealth Radiology Associates
  • Terence Matalon, MD, FACR, FSIR – Einstein Healthcare Network
  • Syam Reddy, MD – University of Chicago Ingalls Memorial, Radiology Partners Chicago

About The Imaging Wire

The Imaging Wire is a newsletter and website dedicated to making it easy for the people of medical imaging to be well informed about their specialty and industry. Read twice weekly by thousands of global radiology professionals, The Imaging Wire is the first publication from business news company Insight Links, which is dedicated to expanding news literacy across healthcare. For more information: https://theimagingwire.com/.

Imaging Wire Q&A: HAP Redefines Partnership

They say that in times of crisis, you get to know who your real friends and partners are. This adage gained new significance for Triad Radiology Associates earlier this year, as the COVID-19 pandemic upended its operations and Healthcare Administrative Partners (HAP) stepped up to help guide the practice through this unprecedented disruption.

In this Imaging Wire Q&A, we sat down with Darlene Clagett, Director of Revenue Cycle Management at Triad Radiology Associates, and Rebecca Farrington, Chief Revenue Officer at Healthcare Administrative Partners, to discuss their partnership and how it evolved during the COVID-19 emergency.



The Imaging Wire: You’ve been working with Healthcare Administrative Partners (HAP) for over two years. Can you share a bit about how your revenue cycle management operations have improved since you started working with them?


Darlene: We looked at multiple revenue cycle vendors during our evaluation process. Our process was very thorough because we wanted a partnership that would sustain us through whatever challenges might come our way. At Triad we have invested in our leadership, our employees, and our technology, and we wanted a revenue cycle partner that made those same investments.

We feel like we have a true partnership with HAP. We communicate frequently and work together to consistently improve our metrics. We have seen improvement in many areas: coding accuracy is much better, days in AR and denial write-offs are down, reconciliation of services to charges is managed monthly, and net collections have increased.



The Imaging Wire: The pandemic has created challenges unlike any that private independent radiology groups have experienced before. Did you look to Healthcare Administrative Partners to add value beyond their standard services? Can you talk about what they did above and beyond a normal revenue cycle scope of service?


Darlene: It became immediately evident that the pandemic was having a dramatic impact on radiology practices and their revenue cycle partners, as volumes dropped significantly under stay-at-home orders.

From the beginning, HAP kept us updated on their ability to maintain operational excellence while protecting their staff by quickly moving to a virtual environment. They had the IT infrastructure in place to ensure that processes were executed securely.

We requested estimated cash flow projections to assist us in planning and applying for relief programs, and these were provided promptly. HAP also kept us apprised of relief program updates as they happened and provided recommendations on how to apply. In a couple of instances, their advice and prompt updates made the difference in our ability to receive relief funds.



The Imaging Wire: Rebecca, an RCM scope of services typically covers the nuts and bolts of the billing process. When the pandemic began, were you concerned about how you would be able to support your clients’ new challenges?


Rebecca: The short answer is yes. As a small business ourselves, we had to find the right resources to guide our decisions around ensuring our financial security, as well as doing our part to protect both the safety and the financial well-being of our employees. It also quickly became clear that our clients could benefit from our research and connections.

The easy thing to do would have been to “stay in our RCM lane” and do nothing – but that is the difference between a vendor and a true partner. It was not necessarily in our scope of service to advise on financial matters involving small business loans, but these are unprecedented and confusing times that called for new and different action on our part. There is an accountability and a responsibility that comes with making recommendations like these, so we did not take the decision to share them lightly. We did our homework, double-checked our resources, brought in our experts, and did our best to step up for our clients.



The Imaging Wire: Now that imaging volumes are ramping up, how are you working with HAP to prepare for the post-COVID rebound?


Darlene: HAP has played an integral role in helping us plan for the return to our pre-COVID revenue numbers. They have done a great job helping us build out revenue projections, which has helped with our staffing plans. They also provided guidance on PPP programs to help fill the revenue gap as volume improves. HAP now provides us with current weekly volume comparisons to pre-COVID dates so we can see how we are progressing in our return to previous numbers. They have also shared data so we can compare our rebound to that of our peers. This benchmarking is critical to revenue planning.


The Imaging Wire: When you made the decision to change RCM partners, you underwent a very detailed evaluation of the market and available options. After your experience with HAP these last two years, what recommendations do you have for groups that are beginning the process of considering an alternative to their current RCM set up?


Darlene: Any radiology practice considering RCM partners should prepare a detailed Q&A for their RFP so that they ask the same questions to each company. It’s also important to request involvement in the process from the people you will work with day to day.

Practices need to decide what they are looking for and make sure they are comfortable that their selection can provide it. We wanted a partner that would function as “our” billing department with dedicated staff to handle Triad. We are very pleased with the decision that we made and the great job that HAP is doing for us.



The Imaging Wire: Rebecca, you said that HAP acts as a true partner to your clients, not just a vendor. What does partnership mean to you?

Rebecca: Partnership is a two-way street. It is a relationship that is mutually beneficial and supportive, setting both parties up for success. It means stepping up and doing what you need to do to help, not because you have to, but because it is the right thing to do. Our clients’ success is our success.


About Triad Radiology Associates:

Triad Radiology has supported the Piedmont Triad, North Carolina area with high-quality imaging and radiology services for over 50 years. Triad’s 45 diagnostic and interventional radiologists, state-of-the-art technology, and patient-centric approach ensure that its patients can get the care they need and get back to the important things in life.

About Healthcare Administrative Partners:

Healthcare Administrative Partners (HAP) empowers hospital-employed and privately owned radiology groups to maximize revenue and minimize compliance risks despite the challenges of a complex, changing healthcare economy. HAP goes beyond billing services, delivering the clinical analytics, practice management, and specialized coding expertise needed to fully optimize revenue cycles. Since 1995, radiologists have turned to HAP as a trusted educator and true business partner.

Imaging Wire Q&A: Quantifying Riverain Technologies ClearRead CT

With Professor Thomas Frauenfelder
Deputy Director of Diagnostic and Interventional Radiology
University Hospital of Zurich

It says a lot when a solution works so well for a radiology department that they decide to perform a study to quantify its benefits. That is exactly what happened at the University Hospital of Zurich (USZ): USZ set up a study on the clinical and workflow benefits of Riverain™ Technologies ClearRead™ CT after implementing the solution into its chest CT workflow.

In this Imaging Wire Q&A, we sat down with Professor Thomas Frauenfelder, Deputy Director of Diagnostic and Interventional Radiology at USZ, to discuss how ClearRead CT improved his team’s chest CT reading performance. The study they performed quantified efficiency and accuracy along with key observations to aid other radiology teams looking to bring new CAD solutions into their workflows.




The Imaging Wire: Tell us about your team and how you handle chest CT reading volume.

Professor Frauenfelder: The Institute of Diagnostic and Interventional Radiology at the University Hospital of Zurich consists of about eighteen staff radiologists and twenty residents. Last year we performed around 35,000 CT scans, 40% of which were chest CTs. For reading, we mainly use a standard PACS system.

Since we do not have a lung cancer screening program, most CT scans are related to either trauma, vascular pathologies, tumor diagnosis and follow-up, or interstitial lung diseases. During daytime shifts, about three staff radiologists read up to 70 CT scans.


The Imaging Wire: Why did you start using ClearRead CT and how do you use it?

Professor Frauenfelder: Several years ago, we evaluated a number of applications for lung nodule detection. Although many applications had a very high detection rate, we seldom used them because our radiologists were forced to open a second application just to see the results. Even then, it was common that when our radiologists opened the second application, the cases had not been read by the system.

The advantage of ClearRead CT is that it sends the “nodule-only” images back into the PACS, where they can be reviewed side by side with the “normal” lung window by forming specific hanging protocols. Our radiologists liked this type of display because they were able to stay in the system and quickly get an overview of possible lung nodules.


The Imaging Wire: Is that what inspired you to perform your study?

Professor Frauenfelder: We found that radiologists were able to review cases much more efficiently and safely with this type of display, especially the young residents. Since there was limited scientific data on the use of the software, we decided to conduct a study to confirm ClearRead CT accuracy and efficiency.

For the study, we created vessel-suppressed reconstructions of 100 patients’ contrast-enhanced chest CTs using ClearRead CT. Two groups of three radiologists read the two sets of images, and we found that the vessel-suppressed CTs had 21% greater nodule detection rates, much higher interreader agreement, and about 20% shorter average read times.


The Imaging Wire: What were the most compelling takeaways?

Professor Frauenfelder: Well, we expected that the results would be in favor of ClearRead CT concerning the detection rate and reading time, but it was surprising that the advantages were so significant.


The Imaging Wire: What was your experience with respect to ClearRead CT’s ease of installation and integration into the workflow?

Professor Frauenfelder: ClearRead CT was very easy to install for our ICT. The advantage is that we can adapt many parameters on our own, especially if CT protocols are changing. This gives us a lot of flexibility.

Because all post-processed images are directly stored into the PACS, they are accessible without changing applications. This saves a lot of time. We can also access the results in more detail by using the Web interface, if needed.

Overall, it keeps workflow running very smoothly.


The Imaging Wire: Based on your research and experience with ClearRead CT, what do you see as the most important qualities to look for in a CAD product?

Professor Frauenfelder: Well, many products today are very accurate for the depiction of pulmonary nodules. Some might be too sensitive. Since we do not have a lung cancer screening program, it is important that the system fits into our existing workflow and that it assists the radiologist by providing a nodule-specific recommendation about follow-up. Furthermore, the results should be easily transferable into reports.


The Imaging Wire: Do you have experience with any other ClearRead applications (e.g., ClearRead Xray Bone Suppress), and if so, can you share your experience with them?

Professor Frauenfelder: We also use ClearRead Xray with both bone suppression and image enhancement. Our first impression is that ClearRead Xray helps us see pathologies more clearly and more accurately. ClearRead Xray installation and workflow were also very easy, and we’ve benefited from being able to integrate the images in specific hanging protocols on our existing PACS review station.

We also performed a study evaluating the use of ClearRead Xray for COVID-19 diagnosis that we’ll publish in the future. In this retrospective study, we evaluated the diagnostic accuracy of conventional radiography (CXR) and enhanced CXR (eCXR/ClearRead Xray) for the detection and quantification of disease extent in COVID-19 patients compared to chest CT. Our initial findings show that the use of ClearRead Xray increases interreader agreement and has higher sensitivity for the detection of consolidation. So it seems that ClearRead Xray improves the detection of COVID-like pneumonia. However, further analysis is needed.


About Professor Frauenfelder:

Thomas Frauenfelder is a professor of radiology at the University Hospital of Zurich (USZ), as well as its head of chest imaging and deputy director of the Institute for Diagnostic and Interventional Radiology. He has a special interest in medical imaging and the architecture of PACS in the hospital environment.

Imaging Wire Q&A: Qure.ai and MEDNAX Validate AI in the Wild

As the number of available imaging AI algorithms grows each month, the ability to truly validate a model’s performance, and to use that validation to enhance its clinical and operational performance, has arguably become more important than the study-based accuracy claims that had everyone so impressed just a few years ago.

You could say that we’re at the “prove it and improve it” phase of the imaging AI adoption curve, which is what makes Qure.ai’s recent algorithm validation partnership with MEDNAX and vRad so interesting – and so important.

In this Imaging Wire Q&A, we sat down with Chiranjiv Singh, Qure.ai’s Chief Commercial Officer; Brian Baker, vRad’s Director of Software Engineering; and Imad Nijim, MEDNAX Radiology and vRad’s CIO, to discuss the origins and results of their efforts to validate Qure.ai’s qER solution “in the wild.” Here it is:



The Imaging Wire: How did Qure.ai and MEDNAX come to work together?

Brian Baker: To understand how the Qure.ai and MEDNAX partnership came about, a quick history of the MEDNAX AI incubator helps. MEDNAX has been working with AI partners in various forms since 2015, with the primary goal of improving patient care. Qure.ai was one of the earlier partners in that process. Before the incubator was officially launched in 2018, Qure.ai was already collaborating on advanced solutions.

One important thing we bring to these AI partnerships is our massive and diverse data. MEDNAX Radiology Solutions has 2,000-plus facilities in all 50 states. We have radiologists all across the country reading over 7.2 million studies on the MEDNAX Imaging Platform. We have an enormous, heterogeneous data set. The data is not only representative of a very diverse population, but also a very diverse set of modality models, configurations, and protocols.

My primary focus for AI at MEDNAX Radiology Solutions is first and foremost patient care – helping patients is our number one goal. But also important, we want to foster a community of AI partners and use models from those partners in the real world. A big part of that is building models and/or validating models.

Qure.ai came to us with models already built on different data sets. They didn’t need our data set to perform additional model training, but they wanted to do real world validations to ensure their models and solutions were generalizing well in the field against an extremely large and diverse cohort of patients.

That is where the relationship blossomed. Our partnership first focused on the complex aspects of how we see different use cases from a clinical standpoint; we very much align on both use cases and pathologies; this alignment is a critical step for everyone – AI vendors and AI users in radiology alike. The clinical nuances to using a model in production are incredibly intricate, and Qure.ai and MEDNAX’s convergence in this area is a large part of our success.


Chiranjiv Singh: From our inception as a company, there was a clear emphasis that Qure.ai as a brand has to stand for proving the applicability of our product in a real-world context. And, for us to make a significant impact for our customers and their patients, the results have to be highly measurable. This means that our solutions need to be extensively tested and credible at every level. Achieving this degree of validation requires a high volume and variety of independent data sets, and it also required us to expose our algorithm to rigorous test conditions.

That is where our strategic goals aligned with MEDNAX’s goals – and, together with the MEDNAX team, we started calling this validation exercise “testing in the wild.” The Qure.ai team saw the value of partnering with someone of MEDNAX’s size and caliber to drive the variety, volume, and rigor to help us validate every aspect of our solution. Without leveraging the scale and volumes of MEDNAX, we would never have been able to achieve this in such a short period of time unless we had worked with roughly 100 different hospitals in the U.S.

What made the partnership stronger was the caliber of the MEDNAX team and the overall platform that they provided for us to jointly learn and improve. And, for these reasons, a very strategic alignment came about for both our teams, jointly working to make this “validation in the wild” a successful project for us both.


Brian: I believe only half the problem is proving your sensitivity and specificity with a large, diverse patient cohort. That is obviously extremely important for clinical and ethical reasons, but the other part of the problem is figuring out how to ensure that a solution or model works on all the various types of DICOM in the industry. At MEDNAX Radiology Solutions, we see everything in DICOM that you can imagine, and some you would not believe. That might be anything from slightly malformed DICOM, to data in non-standard fields where it shouldn’t be, to secondary captures or other images inside of the study, down to all the protocols involved in imaging (how the scan is actually acquired). With our scale and diversity of data, a model that can operate without erroring and crashing through a single night is an engineering feat on its own.



The Imaging Wire: Brian, can you share about the test, the results, and takeaways?

Brian: We’ve taken Qure.ai’s container-based solution that includes the AI models and plugged it into MEDNAX Radiology Solutions’ own inference engine. In our inference engine, image studies flow to models/solutions that are in a validation run in nearly the same way they will flow if the models/solutions successfully pass our validation. The major difference is that during validation, the results of the models do not initiate any action in the MEDNAX Imaging Platform – instead we just gather the data.

As imaging studies flow through the inference engine, we capture the results along with the results of Natural Language Processing (NLP) models run against our clinical reports (from radiologists). This allows us to very quickly determine how a model is doing at MEDNAX scale. We compare the NLP results to the Image AI results and have a very good understanding of how the model is performing within the MEDNAX Imaging Platform.

My team monitors all models on a continuous basis. For models being validated, this data is what makes up the core basis of our validation process. For models that have already been validated, this continuous monitoring ensures that models remain within approved thresholds – if a model successfully goes through our validation process and is approved by clinical leadership, it is important that the model continues to operate with the same sensitivity and specificity. If for any reason the data changes (patient demographic makeup, image acquisition changes, etc.) and the model no longer performs to our standards, we are alerted and remove that model from usage on the clinical platform.
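The monitoring approach Brian describes – scoring a model’s flags against NLP-derived labels from radiologist reports, then alerting when a validated model drifts below approved thresholds – can be sketched roughly as follows. All function names and thresholds here are hypothetical illustrations, not MEDNAX’s actual implementation:

```python
# Illustrative sketch: compare image-AI flags against NLP-derived labels
# from radiologist reports, then check a validated model against approved
# sensitivity/specificity thresholds. Names and numbers are hypothetical.

def sensitivity_specificity(pairs):
    """pairs: iterable of (nlp_label, ai_flag) booleans, one per study.
    nlp_label is the report-derived 'ground truth'; ai_flag is the model output."""
    tp = fp = tn = fn = 0
    for truth, flagged in pairs:
        if truth and flagged:
            tp += 1          # model correctly flagged a positive study
        elif truth and not flagged:
            fn += 1          # model missed a positive study
        elif not truth and flagged:
            fp += 1          # model flagged a negative study
        else:
            tn += 1          # model correctly passed a negative study
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

def within_thresholds(pairs, min_sensitivity, min_specificity):
    """Return True while the model stays within approved thresholds;
    a False result would trigger an alert and removal from clinical use."""
    sens, spec = sensitivity_specificity(pairs)
    return sens >= min_sensitivity and spec >= min_specificity
```

In a real pipeline the label pairs would stream continuously from the inference engine, so this comparison runs on an ongoing basis rather than once.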

For a validation run, we typically run a model for two weeks and then capture those two weeks of data for further evaluation. The Qure.ai model has been running for several months to make sure it is hardened and successful. When we looked in October, 300,000 studies had passed through. While the validation set is only two weeks of data, Qure.ai’s model held consistent sensitivity and specificity throughout the integration process.

For the validation evaluation, we built a validation document for Qure.ai that explores not only sensitivity and specificity against various NLP definitions, but also smaller hand-reviewed sub-cohorts as well as added analysis focused on sex and age breakdowns.



The Imaging Wire: What were some of the key takeaways for Qure.ai in terms of validation and learning about how your models performed “in the wild?”

Chiranjiv: We learned a great deal from going through this process. A lot of work went into the back-end R&D process – re-examining our data science models and engineering analysis and really pinpointing where the weak points are and where the model can potentially break down. Our team was able to use the feedback and look at real clinical cases to fix these shortcomings and test them again, with constant feedback coming in through MEDNAX. This has made our solution more accurate, our predictive analytics sharper, and our engineering ability far stronger than when we started out. Having the ability to go through the exercise of assessing 300,000 exams in a performance evaluation is a powerful proving ground. We confidently share this with our customers by pointing out that “the accuracy or performance of a model is only one part of fulfilling the promise of making AI real.”

The way the MEDNAX Imaging Platform is set up, it’s like getting near-real-time, live feedback on potential areas of error, improving the model and seeing your false positives and false negatives drop with every round of testing. We learned so much looking at the variety of data, different kinds of DICOMs, incorrect DICOM tags, diverse acquisition protocols, every possible CT manufacturer, varying slice thicknesses, etc. Even though we had a lot of that before this partnership, this experience gave us an opportunity to bring stronger products to market.

The next step for us is to share this with our potential customers and leverage this partnership to further spread the word that “making AI real” is not just about algorithm accuracy. Yes, accuracy is a critical piece – but if, for example, you cannot meet speed requirements (like those vRad and MEDNAX Radiology Solutions had), there is no point in taking 10 minutes to read a CT when the entire turnaround is less than 10 minutes.

As a result of this partnership, we have made significant strides in our journey from innovative data models to working AI products. The Qure.ai team now has the ability and the confidence that, if any large client wants to deploy “AI in the real world,” we have the expertise and experience in handling the kind of volume and variety that we would have never experienced without working with vRad and MEDNAX Radiology Solutions.



The Imaging Wire: Many in the AI research community highlight a need for multi-center prospective studies. What role do you think this type of partnership can play in the absence of these studies, or as a contributor to them?

Brian: I view MEDNAX Radiology Solutions’ role in the AI community as a mandate to help companies such as Qure.ai run large multi-center validations. Often, the community at large views this type of validation as important due to the diverse population of patients. And while I agree that is incredibly important, it is worth noting that it is also important to validate against various DICOM implementations and image study acquisition parameters.


Imad Nijim: There is obviously a lot of research going into this, and the academics are very active in this work. For us, a big focus is on the real-life implications, and there was really hard work on both sides. One of the first steps was defining intracranial hemorrhage; MEDNAX and Qure.ai had different definitions that they had to reconcile. They had to dig into the minutiae of their definitions, and the reconciled results went into the AI and imaging models that they built together.


Chiranjiv: This was not a validation study with one institution that has a standard protocol, defined patient profile, limited device inputs, etc. This is the fastest and closest you can get to a multi-center study, as the exams come from hundreds of different medical facilities across the country. MEDNAX gave us the ability to validate the algorithm with a diverse data set, different user settings, equipment types, and all the other variability that a multi-center study would offer.



The Imaging Wire: Do you have any final thoughts on this partnership?

Chiranjiv: During this experience there was clear alignment on identifying the end value. We both realized that this project is not just about improving accuracy. If this is done well, it will influence decisions that directly impact patient lives. Most of the clinical cases involved CT scans being read as part of night services for medical facilities across the U.S. Many of these facilities, especially the smaller community-based hospitals, may not have experts to read these exams, especially during late-night hours. Our team had the context that if we do all this hard work to get the engineering, accuracy, and clinical definitions right, it positively impacts the patient. We can be the catalyst that makes the difference for that one patient. That has to be the north star. And this vision was what aligned Qure.ai and MEDNAX in the first place, and it’s what drove us to really get this right.


Imad: People who focus on the technology aspect of AI will get tripped up. The questions people need to ask are: What problem are they solving? What workflow are they optimizing? What condition are they trying to create a positive outcome for? These are the questions we need to ask, and then work back into the technology component. It sounds simple, but a lot of people don’t understand that, and it’s a big differentiator between the successful and unsuccessful companies.


Nominations Open for First Annual Imaging Wire Awards

San Diego, California – October 7, 2019 – The Imaging Wire today announced that nominations are open for the first annual Imaging Wire Awards, honoring 2019’s most outstanding contributors to radiology practice and outcomes.

The Imaging Wire Awards will be presented to five imaging professionals for achievements in the following categories:

  • Insights to Action: recognizes efforts to reduce unnecessary imaging
  • Diagnostic Humanitarian: for achievements supporting equality in patient care, in the U.S. or internationally
  • AI Activator: recognizes actions to use artificial intelligence to improve patient care
  • Burnout Fighter: for addressing inefficient work practices that lead to physician burnout
  • Cornerstone: honoring non-physicians for outstanding contributions to the practice of radiology

Those interested in applying or nominating a colleague for one of the above Imaging Wire Awards can do so until November 8th through this link. Winners will be selected by a panel of industry leaders and recognized at RSNA 2019 in Chicago, Illinois.

The 2019 Imaging Wire Awards judges committee includes:

  • Bill Algee, CRA, FAHRA – Columbus Regional Hospital
  • Keith J. Dreyer, DO, PhD, FACR, FSIIM – Partners Healthcare
  • Terence A.S. Matalon, MD, FACR, FSIR – Einstein Healthcare Network
  • Jonathan Messinger, MD – Baptist Health South Florida
  • Pooja Rao, MBBS, PhD – Qure.ai
  • Irena Tocino, MD, FACR – Yale University
  • Syed Furqan Zaidi, MD, MBA – Radiology Partners



Imaging Wire Q&A: Qure.ai’s Stroke Solution

I met Dr. Pooja Rao last year through a very revealing email exchange. I sent Pooja a note to share some recent Qure.ai coverage and invite her to subscribe, and she responded with a series of questions about the tools we use to automate this type of outreach. It was at that moment that I realized Dr. Rao is uniquely solutions-oriented.

As Qure.ai’s co-founder and Head of R&D, Pooja is usually focused on solving far more important issues than email automation, using her background in medicine, data science, and neuroscience to make healthcare more accessible and affordable through deep learning.

In this first-ever Imaging Wire Q&A, we sat down with Pooja to discuss the current challenges in stroke and head trauma treatment and how AI solutions, such as Qure.ai’s qER product, stand to improve clinical outcomes. Here it is:


What drew Qure.ai to stroke and head trauma AI?

Pooja Rao: Stroke is one of the leading causes of death and long-term disability worldwide. Patient outcomes depend strongly on how quickly stroke is diagnosed and treated, measured as ‘symptom onset-to-needle’ time.

Most patients with a stroke go through an accelerated stroke protocol that includes rapid imaging and review, but there are many others with brain bleeds (stroke-related or otherwise) who fall outside of this protocol – for example, a patient who is already in the hospital for an ischemic stroke but develops an acute bleed during treatment. That’s where you need AI that works in the background to pick up these scans and prioritize the right patients.

Over 2.5 million people suffer head injuries in the U.S. every year. A fraction of those will require urgent neurosurgical intervention – and imaging is key to making that decision. The use of CT scans in the emergency room has been on the rise for decades, which means that radiologists in turn have long lists of ‘STAT’ scans to review. If AI could scan through these and push the critical ones to the top of the list, it would save a lot of valuable time for these patients.
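The prioritization idea Dr. Rao describes can be pictured as a simple priority queue: AI-flagged critical scans jump ahead of routine STAT studies, while everything else keeps its arrival order. Below is a minimal, illustrative sketch of that concept (the names and data shapes are hypothetical — this is not Qure.ai's actual implementation):

```python
import heapq

# Toy worklist prioritizer: AI-flagged critical scans jump the queue,
# everything else keeps its arrival order. (Illustrative only -- not
# Qure.ai's actual implementation.)
def build_worklist(studies):
    # studies: iterable of (accession_number, ai_flagged_critical)
    heap = []
    for order, (accession, ai_critical) in enumerate(studies):
        # heapq pops the smallest tuple first: priority 0 beats 1,
        # and ties are broken by arrival order
        heapq.heappush(heap, (0 if ai_critical else 1, order, accession))
    return heap

def next_study(heap):
    return heapq.heappop(heap)[2]

worklist = build_worklist([("CT-001", False), ("CT-002", True), ("CT-003", False)])
reading_order = [next_study(worklist) for _ in range(3)]
print(reading_order)  # ['CT-002', 'CT-001', 'CT-003']
```

The AI-flagged scan (CT-002) is read first; the remaining studies stay in arrival order.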


What are the current stroke and trauma guidelines and how does AI fit in?

Pooja Rao: The 2018 American Heart Association/American Stroke Association (AHA/ASA) stroke guidelines say that non-contrast CT provides the information needed to make decisions about acute stroke management in most cases. They also say that the primary role of a head CT scan for patients with stroke symptoms is to rule out a bleed, and that there is no evidence for making treatment decisions based on the subtle CT signs of ischemia.

Further, they advocate for using non-contrast CTs to screen patients because it’s cost-effective. This means that radiologists’ head CT volume continues to grow. High-volume practices can have as many as 20 head CTs an hour in addition to all the other studies they read. Simply flagging critical scans would add a lot of value here.

Stroke centers are also required to score intracranial bleeds by volume. This is another area where AI can save radiologists time, by marking out brain hemorrhages and measuring their volume.
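The volume measurement mentioned here reduces to simple arithmetic once a segmentation exists: count the voxels the model labels as hemorrhage and multiply by the physical volume of one voxel. A minimal sketch, assuming a hypothetical binary mask and voxel spacing taken from the CT header:

```python
import numpy as np

def hemorrhage_volume_ml(mask, voxel_spacing_mm):
    """Volume of a binary hemorrhage segmentation, in millilitres.

    mask: 3-D array of 0/1 voxels (hypothetical AI segmentation output).
    voxel_spacing_mm: (z, y, x) voxel spacing from the CT header, in mm.
    1 mL = 1000 mm^3.
    """
    voxel_vol_mm3 = float(np.prod(voxel_spacing_mm))
    return float(mask.sum()) * voxel_vol_mm3 / 1000.0

# Toy example: a 10x10x10 block of bleed voxels at 1x1x1 mm spacing = 1 mL
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:30, 20:30, 20:30] = 1
vol = hemorrhage_volume_ml(mask, (1.0, 1.0, 1.0))
print(vol)  # 1.0
```

In practice the hard part is producing the segmentation mask itself; the volumetry on top of it is straightforward, which is why it's such a natural time-saver to automate.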


What has research revealed about the performance of AI solutions for stroke and head trauma?

Pooja Rao: Standalone studies show that the technology works well and is safe and effective enough to be used in clinical practice, and it sounds like regulatory bodies agree, given the recent clearance of AI products to triage critical scans and assist radiologists.

Our own study, published last year in The Lancet, showed that qER accurately detects not only bleeds but also other critical head CT scan abnormalities like mass effect (sometimes the only early sign of a tumor), midline shift, and cranial fractures.


What about in clinical use?

Pooja Rao: As we deploy at more hospitals and imaging centers, we’re generating evidence that AI works just as well in the clinical setting as it does in the lab. In addition to proving that the technology generalizes well (performs with high accuracy independent of the CT scanner model or population), we’re also quantifying the clinical benefit to patients, radiologists, and other physicians. When we evaluate the benefits of AI for stroke and head trauma we look at:

  • How much time is saved when critical scans are prioritized by AI?
  • How does this prioritization impact other studies on the worklist?
  • How are patient outcomes impacted?



Where is head trauma and stroke AI being adopted first and who’s finding it most beneficial?

Pooja Rao: There is a lot of AI research coming out of academic centers, where quality of care is the highest and there’s an abundance of the best and brightest doctors. But care and radiology standards aren’t uniform across the world, or even within the U.S.

We’re seeing that the earliest serious AI adopters are community hospitals and remotely located healthcare providers where there may not be reliable, accurate 24×7 radiologist coverage. It also seems that geographies with a shortage of expert care are taking the lead in adopting AI, reflecting where value is truly being added.

Of course, there is still a long way to go and there are a lot of questions that need answers. Is the role of AI to prevent tired doctors from missing critical findings, to save time dictating reports, or to prioritize critical scans on busy worklists? Or is it all three?


And how are these solutions benefiting patients and radiologists?

Pooja Rao: For patients, a lot of the benefit of AI is access – just having access to rapid, accurate diagnosis and treatment, and not having to wait hours in the ER.

For radiologists, the benefits of AI differ based on the setting in which they operate. Busy urban practices or teleradiology setups benefit the most from having critical cases automatically flagged for review. Many radiologists also like having bleeds and midline shifts quantified because it saves them time. In places where radiologist coverage is sparse, radiologists and other clinicians find the mobile phone alerts with non-diagnostic preview images particularly useful.

These are exactly the patient and radiologist benefits we targeted with qER.


What’s the next frontier for head trauma and stroke AI?

Pooja Rao: Everyone wants algorithms that can be superhuman and see abnormalities that radiologists can’t, but there are easier problems to solve first.

One of these is incorporating clinical knowledge. In studies that we’ve done, we’ve observed that radiologists are at their most accurate when provided the full clinical context. We’re now training AI to incorporate that clinical context.

Another one is predicting long-term outcomes. qER already measures the volume of the abnormalities it detects to help study progression in patients with traumatic brain injury. We’re now going beyond quantification and progression monitoring to using these measures to predict patient outcomes.


Thank you, Pooja. It’s exciting to watch Qure.ai work with global healthcare providers to address serious conditions like stroke and tuberculosis and we can’t wait to see what’s next.

