Abstract
In this article, we formulate the problem of estimating and selecting task-relevant temporal signal segments from a single electroencephalogram (EEG) trial as a Markov decision process and propose a novel reinforcement-learning mechanism that can be combined with existing deep-learning-based brain-computer interface methods. Specifically, we devise an actor-critic network such that an agent can determine which timepoints should be used (informative) or discarded (uninformative) when composing the intention-related features of a given trial, thereby enhancing intention identification performance. To validate the effectiveness of our proposed method, we conduct experiments on a large, publicly available motor imagery (MI) dataset and apply our mechanism to various recent deep-learning architectures designed for MI classification. Across these exhaustive experiments, we observe that our proposed method achieves statistically significant improvements in performance.
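The page does not include the authors' implementation. As a purely illustrative sketch of the mechanism the abstract describes, the snippet below assumes a hypothetical PyTorch actor-critic (`TimepointActorCritic`) whose actor emits a per-timepoint keep/discard action that gates a single EEG trial before it reaches a downstream MI classifier; the names, shapes, and reward design are assumptions, not the paper's method.

```python
# Hypothetical sketch (not the authors' code): an actor-critic that scores each
# timepoint of an EEG trial and emits a binary keep/discard mask.
import torch
import torch.nn as nn


class TimepointActorCritic(nn.Module):
    def __init__(self, n_channels: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        self.actor = nn.Linear(hidden, 2)   # per-timepoint keep/discard logits
        self.critic = nn.Linear(hidden, 1)  # per-timepoint state value

    def forward(self, x):
        # x: (batch, time, channels) single-trial EEG
        h, _ = self.encoder(x)                    # (batch, time, hidden)
        dist = torch.distributions.Categorical(logits=self.actor(h))
        actions = dist.sample()                   # 1 = keep, 0 = discard
        return actions, dist.log_prob(actions), self.critic(h).squeeze(-1)


if __name__ == "__main__":
    batch, time, channels, n_classes = 8, 250, 22, 4
    x = torch.randn(batch, time, channels)
    y = torch.randint(0, n_classes, (batch,))

    agent = TimepointActorCritic(n_channels=channels)
    # Stand-in for any deep MI classifier the mechanism is combined with.
    classifier = nn.Sequential(nn.Flatten(), nn.Linear(time * channels, n_classes))

    actions, log_probs, values = agent(x)
    masked = x * actions.unsqueeze(-1).float()    # zero out discarded timepoints
    logits = classifier(masked)

    # Assumed reward: classifier confidence in the true class, one scalar per trial.
    reward = logits.softmax(-1).gather(1, y.unsqueeze(1)).squeeze(1).detach()
    advantage = reward.unsqueeze(1) - values      # broadcast over timepoints
    actor_loss = -(log_probs * advantage.detach()).mean()
    critic_loss = advantage.pow(2).mean()
    (actor_loss + critic_loss).backward()
    print(f"kept {actions.float().mean():.0%} of timepoints")
```

In this sketch the classifier itself would be trained separately (e.g., with cross-entropy on the masked trials); only the actor and critic are updated by the policy-gradient loss shown here.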
| Original language | English |
| --- | --- |
| Pages (from-to) | 1873-1882 |
| Number of pages | 10 |
| Journal | IEEE Transactions on Industrial Informatics |
| Volume | 18 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - 2022 Mar 1 |
Keywords
- Brain-computer interface (BCI)
- deep learning
- electroencephalogram (EEG)
- motor imagery
- reinforcement learning (RL)
- subject independent
ASJC Scopus subject areas
- Control and Systems Engineering
- Information Systems
- Computer Science Applications
- Electrical and Electronic Engineering