EEG Representations of Spatial and Temporal Features in Imagined Speech and Overt Speech

Seo Hyun Lee, Minji Lee, Seong Whan Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

14 Citations (Scopus)


Imagined speech is an emerging paradigm for intuitive control of brain-computer interface based communication systems. Although the decoding performance of imagined speech is improving with actively proposed architectures, the fundamental question of 'what component are they decoding?' remains open. Since imagined speech refers to the internal mechanism of producing speech, it may naturally resemble the distinct features of overt speech. In this paper, we investigate the close relation of the spatial and temporal features between imagined speech and overt speech using electroencephalography signals. Based on the common spatial pattern feature, we obtained averaged thirteen-class classification accuracies of 16.2% and 59.9% (chance rate = 7.7%) for imagined speech and overt speech, respectively. Although overt speech showed significantly higher classification performance than imagined speech, we found potentially similar common spatial patterns for identical classes of imagined speech and overt speech. Furthermore, in the temporal feature, we observed analogous grand-averaged potentials for the most distinguishable classes in the two speech paradigms. Specifically, the correlation of amplitude between imagined speech and overt speech was 0.71 in the class with the highest true positive rate. The similar spatial and temporal features of the two paradigms may provide a key to bottom-up decoding of imagined speech, implying the possibility of robust classification of multiclass imagined speech. This could be a milestone toward comprehensive decoding of speech-related paradigms based on their underlying patterns.
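The common spatial pattern (CSP) feature named in the abstract can be sketched as follows. This is a minimal two-class CSP illustration in NumPy/SciPy on synthetic data, not the authors' code: all function names, array shapes, and the synthetic trials are assumptions. CSP finds spatial filters via a generalized eigendecomposition of the class covariance matrices, and the log-variance of the filtered signals serves as the classification feature.

```python
import numpy as np
from scipy.linalg import eigh


def csp_filters(trials_a, trials_b, n_filters=4):
    """Two-class CSP. trials_* have shape (n_trials, n_channels, n_samples).
    Returns spatial filters that maximize the variance ratio between classes."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: ca @ w = lam * (ca + cb) @ w
    eigvals, eigvecs = eigh(ca, ca + cb)
    # Keep filters from both ends of the eigenvalue spectrum
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_filters // 2], order[-n_filters // 2:]])
    return eigvecs[:, picks].T  # shape (n_filters, n_channels)


def csp_features(trials, w):
    """Normalized log-variance features of spatially filtered trials."""
    filtered = np.einsum('fc,ncs->nfs', w, trials)
    var = filtered.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))


# Synthetic demo: two classes with different variance profiles
rng = np.random.default_rng(0)
a = rng.standard_normal((20, 8, 256))
b = 2.0 * rng.standard_normal((20, 8, 256))
w = csp_filters(a, b)
fa, fb = csp_features(a, w), csp_features(b, w)
print(w.shape, fa.shape)  # (4, 8) (20, 4)
```

For the thirteen-class setting reported in the abstract, two-class CSP is typically extended one-vs-rest, and the resulting log-variance features are fed to a standard classifier; the 7.7% chance rate is simply 1/13.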

Original language: English
Title of host publication: Pattern Recognition - 5th Asian Conference, ACPR 2019, Revised Selected Papers
Editors: Shivakumara Palaiahnakote, Gabriella Sanniti di Baja, Liang Wang, Wei Qi Yan
Number of pages: 14
ISBN (Print): 9783030412982
Publication status: Published - 2020
Event: 5th Asian Conference on Pattern Recognition, ACPR 2019 - Auckland, New Zealand
Duration: 2019 Nov 26 - 2019 Nov 29

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12047 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 5th Asian Conference on Pattern Recognition, ACPR 2019
Country/Territory: New Zealand

Bibliographical note

Funding Information:
Acknowledgements. This work was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (No. 2017-0-00451; Development of BCI based Brain and Cognitive Computing Technology for Recognizing User’s Intentions using Deep Learning). The authors thank D.-K. Han for the useful discussion of the data analysis.

Publisher Copyright:
© 2020, Springer Nature Switzerland AG.


Keywords

  • Brain-computer interface
  • Common spatial pattern
  • Electroencephalography
  • Imagined speech
  • Overt speech

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science

