Abstract
Background: In cognitive neuroscience the potential of deep neural networks (DNNs) for solving complex classification tasks is yet to be fully exploited. The most limiting factor is that DNNs, as notorious 'black boxes', do not provide insight into the neurophysiological phenomena underlying a decision. Layer-wise relevance propagation (LRP) has been introduced as a novel method to explain individual network decisions.

New method: We propose, for the first time, the application of DNNs with LRP to EEG data analysis. Through LRP the single-trial DNN decisions are transformed into heatmaps indicating each data point's relevance for the outcome of the decision.

Results: The DNN achieves classification accuracies comparable to those of CSP-LDA. In subjects with low performance, subject-to-subject transfer of trained DNNs can improve the results. The single-trial LRP heatmaps reveal neurophysiologically plausible patterns, resembling CSP-derived scalp maps. Critically, while CSP patterns represent class-wise aggregated information, LRP heatmaps pinpoint neural patterns to single time points in single trials.

Comparison with existing method(s): We compare the classification performance of DNNs to that of linear CSP-LDA on two data sets related to motor-imagery BCI.

Conclusion: We have demonstrated that the DNN is a powerful non-linear tool for EEG analysis. With LRP, a new quality of high-resolution assessment of neural activity can be reached. LRP is a potential remedy for the lack of interpretability of DNNs that has limited their utility in neuroscientific applications. The extreme specificity of the LRP-derived heatmaps opens up new avenues for investigating neural activity underlying complex perception or decision-related processes.
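The core idea summarized above is to redistribute a trained DNN's output score backwards through the network so that each input sample of an EEG trial (channel and time point) receives a relevance value, which is then rendered as a heatmap. As a rough illustration only, the sketch below implements the commonly used LRP epsilon-rule for a single dense layer in NumPy; the function name, shapes, and toy values are illustrative assumptions and do not reproduce the architecture or code used in the paper.

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out to the inputs of one dense layer
    via the LRP epsilon-rule: R_i = sum_j a_i * W_ij / (z_j + eps*sign(z_j)) * R_j.

    a     : (n_in,)        activations entering the layer
    W     : (n_in, n_out)  weight matrix
    b     : (n_out,)       bias vector
    R_out : (n_out,)       relevance already assigned to the layer's outputs
    """
    z = a @ W + b                              # pre-activations of the layer
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilizer keeps the division well-defined
    s = R_out / z                              # relevance per unit of pre-activation
    return a * (W @ s)                         # relevance attributed to each input

# Toy usage: start from the score of the decided class (relevance 1 for that
# output, 0 elsewhere) and propagate it back through one layer.
rng = np.random.default_rng(0)
a = np.array([0.5, 1.2, -0.3])
W = rng.standard_normal((3, 2))
b = np.zeros(2)
R_in = lrp_epsilon_dense(a, W, b, np.array([1.0, 0.0]))
print(R_in, R_in.sum())  # per-input relevances; their sum approximately conserves R_out
```

Applying such a rule layer by layer down to the input layer yields per-channel, per-time-point relevance maps of the kind the abstract describes; in practice a dedicated LRP implementation would be used rather than hand-rolled code.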
Original language | English |
---|---|
Pages (from-to) | 141-145 |
Number of pages | 5 |
Journal | Journal of Neuroscience Methods |
Volume | 274 |
DOIs | |
Publication status | Published - 2016 Dec 1 |
Bibliographical note
Funding Information: This work was supported by the Brain Korea 21 Plus Program and by the Deutsche Forschungsgemeinschaft (DFG). This publication only reflects the authors' views. Funding agencies are not liable for any use that may be made of the information contained herein.
Publisher Copyright:
© 2016 Elsevier B.V.
Keywords
- Brain–computer interfacing
- Interpretability
- Neural networks
ASJC Scopus subject areas
- General Neuroscience