TY - JOUR
T1 - Subject-Independent Brain-Computer Interfaces Based on Deep Convolutional Neural Networks
AU - Kwon, O. Yeon
AU - Lee, Min Ho
AU - Guan, Cuntai
AU - Lee, Seong Whan
N1 - Funding Information:
Manuscript received July 20, 2018; revised January 6, 2019, May 6, 2019, and August 24, 2019; accepted October 3, 2019. Date of publication November 13, 2019; date of current version October 6, 2020. This work was supported in part by Institute for Information and Communications Technology Planning and Evaluation (IITP) Grant funded by the Korea Government (MSIT) (Development of Intelligent Pattern Recognition Softwares for Ambulatory Brain-Computer Interface) under Grant 2015-0-00185 and (Development of BCI based Brain and Cognitive Computing Technology for Recognizing User's Intentions using Deep Learning) under Grant 2017-0-00451 and in part by the Samsung Research Funding Center of Samsung Electronics under Project SRFC-TC1603-02. (Corresponding author: Seong-Whan Lee.) O-Y. Kwon is with the Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, South Korea (e-mail: data0311@gmail.com).
Publisher Copyright:
© 2019 IEEE.
PY - 2020/10
Y1 - 2020/10
N2 - For a brain-computer interface (BCI) system, a calibration procedure is required for each individual user before he/she can use the BCI. This procedure requires approximately 20-30 min to collect enough data to build a reliable decoder. It is, therefore, an interesting topic to build a calibration-free, or subject-independent, BCI. In this article, we construct a large motor imagery (MI)-based electroencephalography (EEG) database and propose a subject-independent framework based on deep convolutional neural networks (CNNs). The database is composed of 54 subjects performing the left- and right-hand MI on two different days, resulting in 21 600 trials for the MI task. In our framework, we formulated the discriminative feature representation as a combination of the spectral-spatial input embedding the diversity of the EEG signals, as well as a feature representation learned from the CNN through a fusion technique that integrates a variety of discriminative brain signal patterns. To generate spectral-spatial inputs, we first consider the discriminative frequency bands in an information-theoretic observation model that measures the power of the features in two classes. From discriminative frequency bands, spectral-spatial inputs that include the unique characteristics of brain signal patterns are generated and then transformed into a covariance matrix as the input to the CNN. In the process of feature representations, spectral-spatial inputs are individually trained through the CNN and then combined by a concatenation fusion technique. In this article, we demonstrate that the classification accuracy of our subject-independent (or calibration-free) model outperforms that of subject-dependent models using various methods [common spatial pattern (CSP), common spatiospectral pattern (CSSP), filter bank CSP (FBCSP), and Bayesian spatio-spectral filter optimization (BSSFO)].
AB - For a brain-computer interface (BCI) system, a calibration procedure is required for each individual user before he/she can use the BCI. This procedure requires approximately 20-30 min to collect enough data to build a reliable decoder. It is, therefore, an interesting topic to build a calibration-free, or subject-independent, BCI. In this article, we construct a large motor imagery (MI)-based electroencephalography (EEG) database and propose a subject-independent framework based on deep convolutional neural networks (CNNs). The database is composed of 54 subjects performing the left- and right-hand MI on two different days, resulting in 21 600 trials for the MI task. In our framework, we formulated the discriminative feature representation as a combination of the spectral-spatial input embedding the diversity of the EEG signals, as well as a feature representation learned from the CNN through a fusion technique that integrates a variety of discriminative brain signal patterns. To generate spectral-spatial inputs, we first consider the discriminative frequency bands in an information-theoretic observation model that measures the power of the features in two classes. From discriminative frequency bands, spectral-spatial inputs that include the unique characteristics of brain signal patterns are generated and then transformed into a covariance matrix as the input to the CNN. In the process of feature representations, spectral-spatial inputs are individually trained through the CNN and then combined by a concatenation fusion technique. In this article, we demonstrate that the classification accuracy of our subject-independent (or calibration-free) model outperforms that of subject-dependent models using various methods [common spatial pattern (CSP), common spatiospectral pattern (CSSP), filter bank CSP (FBCSP), and Bayesian spatio-spectral filter optimization (BSSFO)].
KW - Brain-computer interface (BCI)
KW - convolutional neural networks (CNNs)
KW - deep learning (DL)
KW - electroencephalography (EEG)
KW - motor imagery (MI)
KW - subject-independent
UR - http://www.scopus.com/inward/record.url?scp=85092679904&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2019.2946869
DO - 10.1109/TNNLS.2019.2946869
M3 - Article
C2 - 31725394
AN - SCOPUS:85092679904
SN - 2162-237X
VL - 31
SP - 3839
EP - 3852
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 10
M1 - 8897723
ER -