Abstract
After building a classifier with modern machine-learning tools, we typically have a black box at hand that predicts well for unseen data. We thus get an answer to the question of what the most likely label of a given unseen data point is. However, most methods give no answer to why the model predicted a particular label for a single instance, or which features were most influential for that particular instance. Currently, the only method able to provide such explanations is the decision tree. This paper proposes a procedure which, based on a set of assumptions, can explain the decisions of any classification method.
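To make the idea of a per-instance explanation concrete, the following is a minimal sketch, not the paper's actual procedure: it assumes only query access to a black-box `predict_proba` function and estimates, by finite differences, how sensitive the predicted class probability at one test point is to each feature. The toy linear model, the function names, and the step size `eps` are all illustrative assumptions.

```python
import numpy as np

# Illustrative black-box classifier: we may only query predicted
# class probabilities, never inspect the model's internals.
w, b = np.array([2.0, -1.0, 0.5]), 0.3

def predict_proba(X):
    """Return P(y=1 | x) for each row of X (stand-in for any model)."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def local_explanation(x, eps=1e-4):
    """Finite-difference sensitivity of P(y=1 | x) to each feature.

    A large |value| means the feature strongly influences this
    particular prediction; the sign gives the direction of influence.
    """
    grad = np.empty_like(x, dtype=float)
    for i in range(x.size):
        x_hi, x_lo = x.copy(), x.copy()
        x_hi[i] += eps
        x_lo[i] -= eps
        diff = predict_proba(x_hi[None, :]) - predict_proba(x_lo[None, :])
        grad[i] = diff[0] / (2 * eps)
    return grad

x0 = np.array([0.2, 1.0, -0.5])
print("P(y=1 | x0):", predict_proba(x0[None, :])[0])
print("per-feature influence at x0:", local_explanation(x0))
```

Because the sketch needs nothing beyond probability queries, it applies to any classifier; the output is local, i.e. it explains the decision at `x0` only, which is exactly the kind of single-instance question the abstract raises.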
Original language | English |
---|---|
Pages (from-to) | 1803-1831 |
Number of pages | 29 |
Journal | Journal of Machine Learning Research |
Volume | 11 |
Publication status | Published - 2010 Jun |
Keywords
- Ames mutagenicity
- Black box model
- Explaining
- Kernel methods
- Nonlinear
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Statistics and Probability
- Artificial Intelligence