The LRP toolbox for artificial neural networks

Sebastian Lapuschkin, Alexander Binder, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

Research output: Contribution to journal › Article › peer-review

100 Citations (Scopus)


The Layer-wise Relevance Propagation (LRP) algorithm explains a classifier's prediction for a given data point by attributing relevance scores to important components of the input, using the topology of the learned model itself. With the LRP Toolbox we provide platform-agnostic implementations for explaining the predictions of pre-trained state-of-the-art Caffe networks, as well as stand-alone implementations for fully connected neural network models. The Matlab and Python implementations are written with readability and transparency in mind and serve as a playing field for familiarizing oneself with the LRP algorithm. Models and data can be imported and exported using raw text formats, Matlab's .mat files, and NumPy's .npy format.
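To illustrate the relevance-redistribution idea described in the abstract, the following is a minimal sketch of an LRP propagation rule (the epsilon-stabilized variant) for a single fully connected layer in NumPy. It is not the toolbox's actual API; the names `lrp_epsilon`, `W`, `b`, `x`, and `eps` are illustrative. Relevance arriving at the layer's outputs is redistributed to the inputs in proportion to each input's contribution to the pre-activations:

```python
import numpy as np

def lrp_epsilon(x, W, b, R_out, eps=1e-6):
    """Propagate relevance R_out back through a dense layer z = W @ x + b.

    Each input x[j] receives relevance proportional to its contribution
    W[k, j] * x[j] to every output pre-activation z[k]; eps stabilizes
    the division when z[k] is close to zero.
    """
    z = W @ x + b                         # pre-activations of the layer
    s = R_out / (z + eps * np.sign(z))    # stabilized relevance per output unit
    return x * (W.T @ s)                  # relevance attributed to each input

# Toy example: a layer with 3 inputs and 2 outputs.
rng = np.random.default_rng(0)
x = rng.standard_normal(3)
W = rng.standard_normal((2, 3))
b = np.zeros(2)

R_out = W @ x + b                         # initialize relevance at the outputs
R_in = lrp_epsilon(x, W, b, R_out)
```

With `eps` small and zero biases, the rule approximately conserves relevance: `R_in.sum()` matches `R_out.sum()`, which is the conservation property that lets LRP decompose a prediction into per-input scores.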

Original language: English
Pages (from-to): 1-5
Number of pages: 5
Journal: Journal of Machine Learning Research
Publication status: Published - 2016 Jun 1

Bibliographical note

Publisher Copyright:
©2016 Sebastian Lapuschkin and Alexander Binder and Grégoire Montavon and Klaus-Robert Müller and Wojciech Samek.


Keywords

  • Artificial neural networks
  • Computer vision
  • Deep learning
  • Explaining classifiers
  • Layer-wise relevance propagation

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence


