A probabilistic regression model provides decision-makers with both the regression output and a quantitative measure of its uncertainty for given input variables. Although this uncertainty estimate can help avoid serious consequences of overconfidence in the output, such as misdiagnosis or blackout, it quantifies only how uncertain the output is; it cannot explain the reasons behind the output or its uncertainty. If the output and its uncertainty were presented together with their reasons, more suitable alternatives to the output could be identified. However, despite advances in explainable artificial intelligence methods for interpreting machine learning models and their outputs, few probabilistic regression models offer this functionality. In this paper, we therefore propose a variational autoencoder-based model for explainable probabilistic regression, called VAPER. VAPER provides a parametric probability distribution of an output variable conditioned on the input variables and interprets it using layer-wise relevance propagation to quantify the effect of each input variable. To evaluate the effectiveness of the proposed model, we performed extensive experiments on several datasets. The results demonstrate that VAPER achieves regression performance competitive with existing models while additionally providing effective explainability.
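To illustrate the explanation mechanism the abstract refers to, the sketch below applies layer-wise relevance propagation (LRP, epsilon-rule) to a toy two-layer regression network in NumPy. The network, weights, and inputs are illustrative assumptions for exposition only, not the VAPER architecture or implementation from the paper.

```python
import numpy as np

# Toy two-layer regression network (illustrative, not VAPER itself).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input dim 3 -> hidden dim 4
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden dim 4 -> scalar output

x = np.array([1.0, 2.0, -0.5])                  # example input variables
a1 = np.maximum(0.0, x @ W1 + b1)               # ReLU hidden activations
y = a1 @ W2 + b2                                # regression output

def lrp_epsilon(a_in, W, R_out, eps=1e-6):
    """Redistribute relevance R_out from a layer's output to its input
    using the LRP epsilon-rule (biases omitted for simplicity)."""
    z = a_in @ W                                # pre-activations
    z = z + eps * np.sign(z)                    # stabilizer avoids division by zero
    s = R_out / z
    return a_in * (W @ s)                       # relevance assigned to each input

R2 = y.copy()                                   # start: the output is its own relevance
R1 = lrp_epsilon(a1, W2, R2)                    # relevance of hidden units
R0 = lrp_epsilon(x, W1, R1)                     # relevance of each input variable

print("output:", y)
print("input relevances:", R0)                  # per-variable contribution to y
```

With zero biases, the epsilon-rule conserves relevance across layers up to the stabilizer term, so `R0.sum()` approximately equals the output `y`; the sign and magnitude of each entry of `R0` indicate how that input variable pushed the prediction up or down.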
Bibliographical note

Funding Information:
This work was supported by the Korea Environment Industry & Technology Institute (KEITI) through the Exotic Invasive Species Management Program, funded by the Korea Ministry of Environment (MOE) (2021002280004), and in part by the Energy Cloud R&D Program (Grant No. 2019M3F2A1073184) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT.
© 2022 Elsevier B.V.
Keywords
- Explainable artificial intelligence
- Layer-wise relevance propagation
- Probabilistic regression
- Variational autoencoder
ASJC Scopus subject areas
- Theoretical Computer Science
- Computer Science (all)
- Modelling and Simulation