Advancing Post-Hoc Case-Based Explanation with Feature Highlighting
Eoin M. Kenny, Eoin Delaney, Mark T. Keane
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 427-435.
https://doi.org/10.24963/ijcai.2023/48
Explainable AI (XAI) has been proposed as a valuable tool to assist in downstream tasks involving human-AI collaboration. Perhaps the most psychologically valid XAI techniques are case-based approaches, which display "whole" exemplars to explain the predictions of black-box AI systems. However, for such post-hoc XAI methods dealing with images, no prior work has broadened their scope by using multiple clear feature "parts" of an image to explain a prediction while linking those parts back to relevant cases in the training data, which would allow for more comprehensive explanations that remain faithful to the underlying model. Here, we address this gap by proposing two general algorithms (latent-based and superpixel-based) that isolate multiple clear feature parts in a test image and connect them to explanatory cases found in the training data, and we then test their effectiveness in a carefully designed user study. Results demonstrate that the proposed approach appropriately calibrates users' feelings of "correctness" for ambiguous classifications on real-world ImageNet data, an effect that does not occur when the explanation is shown without feature highlighting.
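The sketch below is a minimal, hypothetical illustration of the superpixel-based idea the abstract describes: segment a test image into superpixel "parts", score each part by how much occluding it changes the model's prediction, and link the most influential parts back to explanatory cases in the training data via nearest-neighbour search in latent space. It is not the authors' algorithm; the `model` object and its `predict_proba` and `latent` methods are assumptions used only for illustration.

```python
# Hypothetical sketch: superpixel-based feature highlighting with case retrieval.
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import NearestNeighbors


def explain_with_highlighted_parts(model, test_image, train_latents, top_k=3):
    # Partition the test image into superpixel "parts".
    segments = slic(test_image, n_segments=50, compactness=10)
    base_prob = model.predict_proba(test_image[None])[0].max()

    # Score each superpixel by the prediction drop when it is occluded.
    scores = {}
    for seg_id in np.unique(segments):
        occluded = test_image.copy()
        occluded[segments == seg_id] = test_image.mean()
        scores[seg_id] = base_prob - model.predict_proba(occluded[None])[0].max()

    # Keep the most influential parts and connect each one to its nearest
    # training case in the model's latent space (the explanatory exemplar).
    top_parts = sorted(scores, key=scores.get, reverse=True)[:top_k]
    nn = NearestNeighbors(n_neighbors=1).fit(train_latents)

    explanations = []
    for seg_id in top_parts:
        masked = np.where((segments == seg_id)[..., None], test_image, 0)
        _, idx = nn.kneighbors(model.latent(masked[None]))
        explanations.append((seg_id, int(idx[0, 0])))  # (highlighted part, training-case index)
    return explanations
```

In this reading, each returned pair gives a highlighted region of the test image together with the index of a training case that explains it, which is one plausible way to realise "feature parts linked to explanatory cases"; the paper's actual latent-based and superpixel-based algorithms should be consulted for the precise method.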
Keywords:
AI Ethics, Trust, Fairness: ETF: Explainability and interpretability
Knowledge Representation and Reasoning: KRR: Case-based reasoning