Towards Voice Reconstruction from EEG during Imagined Speech

Young Eun Lee, Seo Hyun Lee, Sang Ho Kim, Seong Whan Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Translating imagined speech from human brain activity into voice is a challenging and absorbing research issue that can provide new means of human communication via brain signals. Efforts to reconstruct speech from brain activity have shown their potential using invasive measurements of spoken speech data, but have faced challenges in reconstructing imagined speech. In this paper, we propose NeuroTalk, which converts non-invasive brain signals of imagined speech into the user's own voice. Our model was trained with spoken-speech EEG, which was generalized to adapt to the domain of imagined speech, thus allowing natural correspondence between the imagined speech and the voice as a ground truth. In our framework, an automatic speech recognition decoder contributed to decomposing the phonemes of the generated speech, demonstrating the potential of voice reconstruction from unseen words. Our results imply the potential of speech synthesis from human EEG signals, not only from spoken speech but also from the brain signals of imagined speech.

Original language: English
Title of host publication: AAAI-23 Technical Tracks 5
Editors: Brian Williams, Yiling Chen, Jennifer Neville
Publisher: AAAI Press
Pages: 6030-6038
Number of pages: 9
ISBN (Electronic): 9781577358800
Publication status: Published - 2023 Jun 27
Event: 37th AAAI Conference on Artificial Intelligence, AAAI 2023 - Washington, United States
Duration: 2023 Feb 7 - 2023 Feb 14

Publication series

Name: Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Volume: 37

Conference

Conference: 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Country/Territory: United States
City: Washington
Period: 23/2/7 - 23/2/14

Bibliographical note

Funding Information:
This work was supported by Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No.2021-0-02068, Artificial Intelligence Innovation Hub; No. 2017-0-00451, Development of BCI based Brain and Cognitive Computing Technology for Recognizing User’s Intentions using Deep Learning; No. 2019-0-00079, Artificial Intelligence Graduate School Program(Korea University)).

Publisher Copyright:
Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

ASJC Scopus subject areas

  • Artificial Intelligence
