How to explain individual classification decisions

D. Baehrens, T. Schroeter, S. Harmeling, M. Kawanabe, K. Hansen, K.-R. Müller
The Journal of Machine Learning Research, 2010. jmlr.org
Abstract
After building a classifier with modern machine learning tools, we typically have a black box at hand that predicts well on unseen data. Thus, we get an answer to the question of what the most likely label of a given unseen data point is. However, most methods provide no answer as to why the model predicted a particular label for a single instance, or which features were most influential for that particular instance. The only method currently able to provide such explanations is the decision tree. This paper proposes a procedure which (based on a set of assumptions) allows one to explain the decisions of any classification method.
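
To make the idea of per-instance explanations concrete, below is a minimal sketch of one generic way to score local feature influence for an arbitrary probabilistic classifier: a finite-difference sensitivity of the predicted class probability around a single instance. This is an illustrative assumption, not necessarily the paper's exact procedure; the helper name local_explanation, the step size eps, and the use of scikit-learn's SVC are hypothetical choices made only for the sketch.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    def local_explanation(predict_proba, x, class_idx, eps=1e-3):
        """Finite-difference sensitivity of P(class_idx | x) per feature.

        Illustrative sketch: perturb one feature at a time and measure how
        the predicted class probability changes; a large magnitude marks a
        locally influential feature for this particular instance.
        """
        x = np.asarray(x, dtype=float)
        base = predict_proba(x[None, :])[0, class_idx]
        grad = np.empty_like(x)
        for j in range(x.size):
            x_step = x.copy()
            x_step[j] += eps
            grad[j] = (predict_proba(x_step[None, :])[0, class_idx] - base) / eps
        return grad

    # Hypothetical usage on synthetic data with a black-box classifier.
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    clf = SVC(probability=True, random_state=0).fit(X, y)
    x0 = X[0]
    cls = int(clf.predict(x0[None, :])[0])
    print("predicted class:", cls)
    print("local feature influence:", local_explanation(clf.predict_proba, x0, cls))

Nothing in the sketch depends on the classifier's internals: any model exposing predicted class probabilities can be probed this way, which mirrors the abstract's claim of applicability to any classification method.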