Abstract
Decoding EEG signals for imagined speech is a challenging task due to the high-dimensional nature of the data and low signal-to-noise ratio. In recent years, denoising diffusion probabilistic models (DDPMs) have emerged as promising approaches for representation learning in various domains. Our study proposes a novel method for decoding EEG signals for imagined speech using DDPMs and a conditional autoencoder named Diff-E. Results indicate that Diff-E significantly improves the accuracy of decoding EEG signals for imagined speech compared to traditional machine learning techniques and baseline models. Our findings suggest that DDPMs can be an effective tool for EEG signal decoding, with potential implications for the development of brain-computer interfaces that enable communication through imagined speech.
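The abstract does not give implementation details, but the DDPM component it refers to rests on a standard forward (noising) process that the model learns to invert. The sketch below illustrates that forward process on a toy EEG-shaped array; the linear beta schedule, the 64-channel × 256-sample segment, and all hyperparameter values are illustrative assumptions, not the actual Diff-E configuration.

```python
import numpy as np

# Hedged sketch of the DDPM forward (noising) process; schedule values and
# array shapes are illustrative assumptions, not the paper's hyperparameters.

def linear_beta_schedule(T, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule beta_1..beta_T (a common DDPM default)."""
    return np.linspace(beta_start, beta_end, T)

def forward_diffuse(x0, t, alphas_cumprod, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    abar = alphas_cumprod[t]
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * noise
    return xt, noise

T = 1000
betas = linear_beta_schedule(T)
alphas_cumprod = np.cumprod(1.0 - betas)  # abar_t, decreasing toward 0

rng = np.random.default_rng(0)
# Toy "EEG" segment: 64 channels x 256 time samples (hypothetical shape).
x0 = rng.standard_normal((64, 256))

x_small, _ = forward_diffuse(x0, 10, alphas_cumprod, rng)   # mostly signal
x_large, _ = forward_diffuse(x0, 999, alphas_cumprod, rng)  # mostly noise
```

During training, a denoising network would be asked to predict `noise` from `xt` and `t`; in a Diff-E-style setup, the representations learned this way would then feed a conditional autoencoder/classifier for imagined-speech decoding.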
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 1159-1163 |
| Number of pages | 5 |
| Journal | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH |
| Volume | 2023-August |
| DOIs | |
| Publication status | Published - 2023 |
| Event | 24th Annual Conference of the International Speech Communication Association, INTERSPEECH 2023 - Dublin, Ireland. Duration: 2023 Aug 20 → 2023 Aug 24 |
Bibliographical note
Publisher Copyright: © 2023 International Speech Communication Association. All rights reserved.
Keywords
- brain-computer interface
- electroencephalography
- imagined speech
- silent communication
- speech recognition
ASJC Scopus subject areas
- Software
- Signal Processing
- Language and Linguistics
- Modelling and Simulation
- Human-Computer Interaction
Title: Diff-E: Diffusion-based Learning for Decoding Imagined Speech EEG