Articulatory information for noise robust speech recognition

  • Vikramjit Mitra*
  • Hosung Nam
  • Carol Y. Espy-Wilson
  • Elliot Saltzman
  • Louis Goldstein

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

60 Citations (Scopus)

Abstract

Prior research has shown that articulatory information, if extracted properly from the speech signal, can improve the performance of automatic speech recognition systems. However, such information is not readily available in the signal. The challenge posed by the estimation of articulatory information from speech acoustics has led to a new line of research known as "acoustic-to-articulatory inversion" or "speech inversion." While most of the research in this area has focused on estimating articulatory information more accurately, few studies have explored ways to apply this information to speech recognition tasks. In this paper, we first estimated articulatory information in the form of vocal tract constriction variables (abbreviated as TVs) from the Aurora-2 speech corpus using a neural-network-based speech-inversion model. Word recognition tasks were then performed for both noisy and clean speech using articulatory information in conjunction with traditional acoustic features. Our results indicate that incorporating TVs can significantly improve word recognition rates when used in conjunction with traditional acoustic features.
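The abstract describes a pipeline in which TV trajectories estimated by a speech-inversion model are combined with traditional acoustic features before recognition. A minimal sketch of that fusion step, assuming frame-synchronous streams (the shapes, feature dimensions, and `fuse_features` helper here are illustrative, not the paper's actual configuration):

```python
import numpy as np

# Hypothetical per-utterance, frame-synchronous feature streams.
# acoustic: (n_frames, 13) traditional acoustic features (e.g., cepstra)
# tvs:      (n_frames, 8)  estimated vocal tract constriction variables
rng = np.random.default_rng(0)
n_frames = 200
acoustic = rng.standard_normal((n_frames, 13))
tvs = rng.standard_normal((n_frames, 8))

def fuse_features(acoustic, articulatory):
    """Concatenate acoustic and articulatory features frame by frame.

    Both streams must have the same number of frames; the fused vector
    replaces the acoustic features as input to the recognizer.
    """
    if acoustic.shape[0] != articulatory.shape[0]:
        raise ValueError("feature streams must be frame-synchronous")
    return np.hstack([acoustic, articulatory])

fused = fuse_features(acoustic, tvs)
print(fused.shape)  # (200, 21)
```

In practice the TV stream comes from the inversion network rather than random data, and both streams are computed on the same frame grid so that per-frame concatenation is well defined.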

Original language: English
Article number: 5677601
Pages (from-to): 1913-1924
Number of pages: 12
Journal: IEEE Transactions on Audio, Speech and Language Processing
Volume: 19
Issue number: 7
DOIs
Publication status: Published - 2011
Externally published: Yes

Bibliographical note

Funding Information:
Manuscript received March 15, 2010; revised July 06, 2010 and October 12, 2010; accepted December 08, 2010. Date of publication December 30, 2010; date of current version July 15, 2011. This work was supported in part by the National Science Foundation under Grants IIS0703859, IIS-0703048, and IIS0703782. V. Mitra and H. Nam contributed equally to this work. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Nestor Becerra Yoma.

Keywords

  • Articulatory phonology
  • articulatory speech recognition
  • artificial neural networks (ANNs)
  • noise-robust speech recognition
  • speech inversion
  • task dynamic model
  • vocal-tract variables

ASJC Scopus subject areas

  • Acoustics and Ultrasonics
  • Electrical and Electronic Engineering
