Explaining and Interpreting LSTMs

Leila Arras, José Arjona-Medina, Michael Widrich, Grégoire Montavon, Michael Gillhofer, Klaus-Robert Müller, Sepp Hochreiter, Wojciech Samek

    Research output: Chapter in Book/Report/Conference proceeding › Chapter

    70 Citations (Scopus)

    Abstract

    While neural networks have acted as a strong unifying force in the design of modern AI systems, the neural network architectures themselves remain highly heterogeneous due to the variety of tasks to be solved. In this chapter, we explore how to adapt the Layer-wise Relevance Propagation (LRP) technique used for explaining the predictions of feed-forward networks to the LSTM architecture used for sequential data modeling and forecasting. The special accumulators and gated interactions present in the LSTM require both a new propagation scheme and an extension of the underlying theoretical framework to deliver faithful explanations.
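
    The chapter itself gives the full derivation, but the core idea can be illustrated with a minimal sketch. The snippet below is not the authors' reference implementation: the helper names (lrp_linear, lrp_gated_product) and the epsilon stabilizer value are assumptions made for illustration. It shows the two ingredients such an adaptation of LRP to LSTMs typically combines: an epsilon-stabilized redistribution rule for linear layers, and a rule for gated multiplicative interactions in which the sigmoid gate receives no relevance and the full share is passed on to the signal.

    ```python
    import numpy as np

    def lrp_linear(x, w, b, r_out, eps=1e-3):
        """Epsilon-LRP for a linear layer z = w @ x + b (sketch).
        Relevance r_out on the outputs is redistributed to the inputs
        in proportion to their contributions w[i, j] * x[j]."""
        z = w @ x + b                                  # pre-activations, shape (out,)
        denom = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilizer avoids division by ~0
        contrib = w * x[np.newaxis, :]                 # (out, in) contribution matrix
        return contrib.T @ (r_out / denom)             # relevance per input, shape (in,)

    def lrp_gated_product(gate, signal, r_out):
        """Product rule for a gated interaction out = gate * signal:
        the sigmoid gate acts as a switch and receives no relevance;
        all of r_out flows to the signal."""
        return np.zeros_like(gate), r_out

    # Toy usage: redistribute relevance 1.0 from a single output unit.
    x = np.array([0.5, -1.0, 2.0])
    w = np.random.randn(1, 3)
    b = np.zeros(1)
    r_in = lrp_linear(x, w, b, r_out=np.array([1.0]))
    ```

    Under the same convention, relevance arriving at the cell accumulator c_t = f_t ⊙ c_{t-1} + i_t ⊙ g_t would be split between the two summands in proportion to their contributions, after which each product forwards its share to c_{t-1} and g_t respectively.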

    Original language: English
    Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Publisher: Springer Verlag
    Pages: 211-238
    Number of pages: 28
    Publication status: Published - 2019

    Publication series

    Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Volume: 11700 LNCS
    ISSN (Print): 0302-9743
    ISSN (Electronic): 1611-3349

    Bibliographical note

    Publisher Copyright:
    © Springer Nature Switzerland AG 2019.

    Keywords

    • Explainable artificial intelligence
    • Interpretability
    • LSTM
    • Model transparency
    • Recurrent neural networks

    ASJC Scopus subject areas

    • Theoretical Computer Science
    • General Computer Science
