Abstract
Deep learning has recently advanced the state of the art in artificial intelligence, and humans rely on artificial intelligence techniques more than ever. However, even with such unprecedented advances, the lack of explanation for the decisions made by deep learning models and the absence of control over their internal processes remain major drawbacks in critical decision-making processes, such as precision medicine and law enforcement. In response, efforts are being made to make deep learning interpretable and controllable by humans. This article reviews visual analytics, information visualization, and machine learning perspectives relevant to this aim, and discusses potential challenges and future research directions.
| Original language | English |
|---|---|
| Pages (from-to) | 84-92 |
| Number of pages | 9 |
| Journal | IEEE Computer Graphics and Applications |
| Volume | 38 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 2018 Jul 1 |
Bibliographical note
Funding Information: We greatly appreciate the feedback from anonymous reviewers. This work was partially supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. NRF-2016R1C1B2015924) and the National NSF of China (No. 61672308). Any opinions, findings, and conclusions or recommendations expressed here are those of the authors and do not necessarily reflect the views of the funding agencies.
Publisher Copyright:
© 1981-2012 IEEE.
Keywords
- computer graphics
- deep learning
- explainable deep learning
- interactive visualization
ASJC Scopus subject areas
- Software
- Computer Graphics and Computer-Aided Design