Explainable Time-Series Prediction Using a Residual Network and Gradient-Based Methods

Hyojung Choi, Chanhwi Jung, Taein Kang, Hyunwoo J. Kim, Il Youp Kwak

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Researchers are employing deep learning (DL) in many fields, and the scope of its application continues to expand. However, because the rationale and validity of DL decisions are difficult to understand, a DL model is sometimes called a black-box model. Here, we focus on a DL-based explainable time-series prediction model. We propose a model based on long short-term memory (LSTM) followed by a convolutional neural network (CNN) with a residual connection, referred to as LSTM-resCNN. Compared with one-dimensional CNN, bidirectional LSTM, CNN-LSTM, LSTM-CNN, and MTEX-CNN models, the proposed LSTM-resCNN performs best on three datasets: fine dust (PM2.5), bike-sharing, and bitcoin. Additionally, we tested three gradient-based approaches for model explainability: Grad-CAM, Integrated Gradients, and Gradients. These gradient-based techniques combine well with the LSTM-resCNN model; variables and time lags that considerably influence the time-series prediction can be identified and visualized using Gradients and Integrated Gradients.
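To make the described pipeline concrete, the sketch below shows one way an LSTM-resCNN-style model and an Integrated Gradients attribution loop could be implemented in PyTorch. This is not the authors' released code: the hidden size, the number of convolutional layers, the pooling head, and the all-zero IG baseline are assumptions made for illustration only.

import torch
import torch.nn as nn


class LSTMResCNN(nn.Module):
    """LSTM followed by a 1-D CNN block with a residual (skip) connection."""

    def __init__(self, n_features: int, hidden: int = 64, horizon: int = 1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.conv1 = nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):                      # x: (batch, time, features)
        h, _ = self.lstm(x)                    # (batch, time, hidden)
        h = h.transpose(1, 2)                  # (batch, hidden, time) for Conv1d
        residual = h
        z = self.relu(self.conv1(h))
        z = self.conv2(z)
        z = self.relu(z + residual)            # residual connection
        return self.head(z.mean(dim=2))        # pool over time, predict horizon


def integrated_gradients(model, x, baseline=None, steps=50):
    """Approximate Integrated Gradients for a scalar prediction.

    Attributions share the input's (time, features) shape, so they indicate
    which variables and time lags drive the forecast.
    """
    if baseline is None:
        baseline = torch.zeros_like(x)         # zero baseline is an assumption
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
    # Interpolate between baseline and input, then average the gradients.
    path = baseline.unsqueeze(0) + alphas * (x - baseline).unsqueeze(0)
    path = path.reshape(-1, *x.shape[1:]).requires_grad_(True)
    preds = model(path).sum()
    grads = torch.autograd.grad(preds, path)[0]
    avg_grads = grads.reshape(steps, *x.shape).mean(dim=0)
    return (x - baseline) * avg_grads          # scale by input-baseline difference


if __name__ == "__main__":
    model = LSTMResCNN(n_features=8)
    x = torch.randn(4, 24, 8)                  # 4 series, 24 time lags, 8 variables
    attributions = integrated_gradients(model, x)
    print(attributions.shape)                  # torch.Size([4, 24, 8])

In this sketch, large attribution magnitudes at a given (time lag, variable) position play the role of the visualized importance maps described in the abstract.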

Original language: English
Pages (from-to): 108469-108482
Number of pages: 14
Journal: IEEE Access
Volume: 10
Publication status: Published - 2022

Bibliographical note

Publisher Copyright:
© 2013 IEEE.

Keywords

  • Neural networks
  • recurrent neural networks
  • time series analysis

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering
