Counterfactual explanation generation with minimal feature boundary
D You, S Niu, S Dong, H Yan, Z Chen, D Wu, L Shen… - Information …, 2023 - Elsevier
The complex and opaque decision-making processes of machine learning models limit the interpretability of predictions and prevent the models from mining results beyond their learned experience. The causality between features and the target variable can be traced by injecting counterfactual explanations into the prediction model and generating counterfactual instances whose adjusted features reverse the prediction results. Existing algorithms, such as Diverse Counterfactual Explanations (DiCE) and Counterfactual Explanations Guided by Prototypes (Proto), can generate single or multiple counterfactuals for a data point by global optimization over the full feature space to act on a local decision range. However, these methods cannot clearly identify which features are the key causes. Moreover, the Random Forest Optimal Counterfactual Set Extractor (RF-OCSE) extracts counterfactual sets from a random forest and must manipulate all the internal nodes of the trees, restricting it to tree-ensemble models. To address these shortcomings, we propose a Counterfactual Explanation Generation method with the Minimal Feature Boundary (MFB), named CEG MFB. The proposed CEG MFB algorithm consists of two stages: 1) mining the MFB, which can reverse the prediction results, to constrain the generation range of counterfactual instances, and 2) constructing a counterfactual generative method that produces counterfactual instances within the MFB at the minimum reversing cost. To evaluate its performance, we compared the proposed CEG MFB algorithm with six baseline algorithms on 16 datasets and conducted a case study in a real scenario. The results indicate that the proposed CEG MFB algorithm outperforms the compared algorithms.
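The abstract only sketches the two-stage idea, so the following is a minimal illustrative approximation, not the authors' implementation: stage 1 grows a per-feature interval around the query instance until perturbing a feature can flip the prediction (a stand-in for mining the MFB), and stage 2 searches inside that boundary for the prediction-reversing candidate with the lowest L1 reversing cost. The function names (mine_mfb, generate_cf), the step-wise boundary search, and the random sampling strategy are all assumptions made for illustration.

```python
# Hypothetical sketch of the two-stage CEG MFB idea from the abstract;
# names and search strategy are assumptions, not the authors' method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def mine_mfb(model, x, step=0.1, max_steps=50):
    """Stage 1 (assumed): per-feature boundary mining.

    For each feature, widen a symmetric interval around x[j] until
    perturbing that single feature flips the prediction, or the step
    budget runs out. Features that never flip keep a zero-width
    interval, keeping the boundary minimal.
    """
    y0 = model.predict(x.reshape(1, -1))[0]
    lo, hi = x.copy(), x.copy()
    for j in range(x.size):
        for k in range(1, max_steps + 1):
            for delta in (k * step, -k * step):
                x_pert = x.copy()
                x_pert[j] += delta
                if model.predict(x_pert.reshape(1, -1))[0] != y0:
                    lo[j] = min(lo[j], x_pert[j])
                    hi[j] = max(hi[j], x_pert[j])
            if hi[j] > lo[j]:
                break  # found a flipping interval for feature j
    return lo, hi

def generate_cf(model, x, lo, hi, n_samples=5000, seed=0):
    """Stage 2 (assumed): sample candidates inside the boundary and
    keep the prediction-reversing one with minimal L1 reversing cost."""
    rng = np.random.default_rng(seed)
    y0 = model.predict(x.reshape(1, -1))[0]
    cands = rng.uniform(lo, hi, size=(n_samples, x.size))
    flipped = cands[model.predict(cands) != y0]
    if flipped.size == 0:
        return None  # no counterfactual found inside the boundary
    costs = np.abs(flipped - x).sum(axis=1)
    return flipped[np.argmin(costs)]

# Usage on a toy classifier: mine the boundary, then generate one
# low-cost counterfactual and inspect which features changed.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)
x = X[0]
lo, hi = mine_mfb(clf, x)
cf = generate_cf(clf, x, lo, hi)
print("original prediction:", clf.predict(x.reshape(1, -1))[0])
if cf is not None:
    print("counterfactual prediction:", clf.predict(cf.reshape(1, -1))[0])
    print("changed features:", np.flatnonzero(np.abs(cf - x) > 1e-9))
```

Because candidates are drawn only from the mined boundary, the changed features point directly at the key causes of the flip, which is the property the abstract claims global-optimization methods such as DiCE and Proto lack.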