A team of Duke University radiologists and computer engineers unveiled a new mammography AI platform that could be an important step towards developing truly interpretable AI.
Explainable History – Healthcare leaders have been calling for explainable imaging AI for some time, but explainability efforts have largely been limited to saliency maps (heat maps) that show which parts of an image influenced a model’s prediction, not how or why.
Duke’s Interpretable Model – Duke’s new AI platform analyzes mammography exams for potentially cancerous lesions to help physicians determine whether a patient should receive a biopsy, while supporting its predictions with image- and case-based explanations.
Training Interpretability – The Duke team trained their AI platform to locate and evaluate lesions following the same process that radiology educators and their students would use:
- First, they trained the AI model to detect suspicious lesions and to ignore healthy tissue
- Then they had radiologists label the edges of the lesions
- Then they trained the model to compare those lesion edges with lesion edges from an archive of images with confirmed outcomes
Interpretable Predictions – This training process allowed the AI model to identify suspicious lesions, highlight the classification-relevant parts of the image, and explain its findings by referencing similar lesions with confirmed outcomes.
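To make the case-based idea concrete, here is a minimal sketch of how a prediction could be explained by pointing to similar archived lesions with confirmed outcomes. The ArchiveCase structure, explain_lesion function, and cosine-similarity comparison are illustrative assumptions for this sketch, not Duke’s implementation; their model learns such comparisons inside a neural network rather than over hand-built feature vectors.

```python
# Illustrative sketch of case-based explanation: compare a new lesion's
# margin features against an archive of lesions with biopsy-confirmed
# outcomes, and report the closest confirmed cases alongside the score.
# All names and the feature representation here are assumptions.
from dataclasses import dataclass
import numpy as np


@dataclass
class ArchiveCase:
    case_id: str
    margin_features: np.ndarray  # feature vector describing the lesion edge
    malignant: bool              # biopsy-confirmed outcome


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two lesion-edge feature vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def explain_lesion(query_features: np.ndarray,
                   archive: list[ArchiveCase],
                   top_k: int = 3) -> dict:
    """Rank archived, outcome-confirmed lesions by similarity to the query
    lesion and derive a malignancy score from the closest matches."""
    ranked = sorted(archive,
                    key=lambda c: cosine_similarity(query_features, c.margin_features),
                    reverse=True)
    top = ranked[:top_k]
    malignancy_score = sum(c.malignant for c in top) / len(top)
    return {
        "malignancy_score": malignancy_score,
        "supporting_cases": [(c.case_id, c.malignant) for c in top],
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    archive = [ArchiveCase(f"case_{i:03d}", rng.normal(size=16), bool(i % 2))
               for i in range(20)]
    query = rng.normal(size=16)
    print(explain_lesion(query, archive))
```

The appeal of this kind of design is that the supporting evidence is itself readable: a radiologist can inspect the retrieved confirmed cases, rather than inferring intent from a heat map.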
Interpretable Results – Like many AI models, this early version could not identify cancerous lesions as accurately as human radiologists. However, it matched the performance of existing “black box” AI systems, and the team was able to see why the model made its mistakes.
The Takeaway
It seems like concerns over AI performance are growing at about the same pace as actual AI adoption, making explainability / interpretability increasingly important. Duke’s interpretable AI platform might be in its early stages, but its use of previous cases to explain findings seems like a promising (and straightforward) way to achieve that goal, while improving diagnosis in the process.