Abstract
In brain–computer interfaces, imagined speech is one of the most promising paradigms due to its intuitiveness and capacity for direct communication. However, decoding imagined speech from EEG is challenging because of the complicated underlying cognitive processes, which produce complex spectro-spatio-temporal patterns. In this work, we propose a novel convolutional neural network architecture for representing such complex patterns and identifying the intended imagined speech. The proposed network exploits two feature-extraction flows to learn richer class-discriminative information. Specifically, it consists of a spatial filtering path and a temporal structure learning path running in parallel, and integrates their output features for decision-making. We demonstrated the validity of the proposed method on a publicly available dataset by achieving state-of-the-art performance. Furthermore, we analyzed the network to show that it learns neurophysiologically plausible patterns.
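The dual-path idea described in the abstract (a spatial filtering path and a temporal path whose output features are fused before classification) can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation; the layer counts, kernel sizes, pooling factors, and the simple concatenation-based fusion are all assumptions.

```python
# Minimal sketch of a dual-path CNN for imagined-speech EEG decoding.
# NOTE: an illustrative approximation of the idea in the abstract, not the
# authors' network; all layer sizes and kernel lengths are assumed.
import torch
import torch.nn as nn


class DualPathEEGNet(nn.Module):
    def __init__(self, n_channels=64, n_samples=512, n_classes=5):
        super().__init__()
        # Spatial filtering path: convolution across electrodes,
        # analogous to learning data-driven spatial filters.
        self.spatial_path = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),            # -> (B, 16, 1, T)
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 8)),
        )
        # Temporal structure learning path: convolution along time,
        # capturing temporal dynamics within each channel.
        self.temporal_path = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),   # -> (B, 16, C, T)
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 8)),
        )
        # Fusion + classifier: flatten both feature maps, concatenate,
        # and map to class scores (simple late fusion assumed here).
        t_pooled = n_samples // 8
        fused_dim = 16 * 1 * t_pooled + 16 * n_channels * t_pooled
        self.classifier = nn.Linear(fused_dim, n_classes)

    def forward(self, x):
        # x: (batch, 1, channels, time) EEG epochs
        f_spatial = self.spatial_path(x).flatten(start_dim=1)
        f_temporal = self.temporal_path(x).flatten(start_dim=1)
        fused = torch.cat([f_spatial, f_temporal], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    # Dummy batch: 8 epochs, 64 electrodes, 512 time samples
    model = DualPathEEGNet()
    logits = model(torch.randn(8, 1, 64, 512))
    print(logits.shape)  # torch.Size([8, 5])
```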
Original language | English |
---|---|
Title of host publication | Pattern Recognition - 6th Asian Conference, ACPR 2021, Revised Selected Papers |
Editors | Christian Wallraven, Qingshan Liu, Hajime Nagahara |
Publisher | Springer Science and Business Media Deutschland GmbH |
Pages | 335-346 |
Number of pages | 12 |
ISBN (Print) | 9783031024436 |
DOIs | |
Publication status | Published - 2022 |
Event | 6th Asian Conference on Pattern Recognition, ACPR 2021 - Virtual, Online |
Duration | 2021 Nov 9 → 2021 Nov 12 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 13189 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 6th Asian Conference on Pattern Recognition, ACPR 2021 |
---|---|
City | Virtual, Online |
Period | 2021 Nov 9 → 2021 Nov 12 |
Bibliographical note
Funding Information: This work was supported by Institute for Information & Communications Technology Promotion (IITP) grants funded by the Korea government under Grant 2017-0-00451 (Development of BCI based Brain and Cognitive Computing Technology for Recognizing User's Intentions using Deep Learning) and Grant 2019-0-00079 (Department of Artificial Intelligence, Korea University).
Publisher Copyright:
© 2022, Springer Nature Switzerland AG.
Keywords
- Brain–computer interface
- Convolutional neural network
- Electroencephalogram
- Imagined speech
ASJC Scopus subject areas
- Theoretical Computer Science
- General Computer Science