The entropy of the articulatory phonological code: Recognizing gestures from tract variables

Xiaodan Zhuang, Hosung Nam, Mark Hasegawa-Johnson, Louis Goldstein, Elliot Saltzman

Research output: Contribution to journal › Conference article › peer-review

9 Citations (Scopus)

Abstract

We propose an instantaneous "gestural pattern vector" that encodes, at each instant in the gestural score, the pattern of gesture activations across tract variables. The design of these gestural pattern vectors is the first step towards an automatic speech recognizer motivated by articulatory phonology, which is expected to be more invariant to speech coarticulation and reduction than conventional speech recognizers built on the sequence-of-phones assumption. We use a tandem model to recover the instantaneous gestural pattern vectors from tract variable time functions in local time windows, achieving classification accuracy of up to 84.5% on synthesized data from one speaker. Recognizing all gestural pattern vectors is equivalent to recognizing the ensemble of gestures. This result suggests that the proposed gestural pattern vector might be a viable unit in statistical models for speech recognition.
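To make the encoding concrete, here is a minimal sketch (not taken from the paper) of how an instantaneous gestural pattern vector might be derived from a gestural score. The tract-variable names and the frame-indexed activation intervals are illustrative assumptions; the paper's actual inventory and time representation may differ.

```python
import numpy as np

# Hypothetical tract-variable inventory (illustrative; the paper's
# exact set of tract variables may differ).
TRACT_VARS = ["LA", "TTCD", "TBCD", "VEL", "GLO"]

def gestural_pattern_vectors(score, n_frames):
    """Encode a gestural score, given as activation intervals per tract
    variable, as one binary pattern vector per frame: entry i is 1 when
    a gesture on tract variable i is active at that frame."""
    patterns = np.zeros((n_frames, len(TRACT_VARS)), dtype=int)
    for tv, intervals in score.items():
        col = TRACT_VARS.index(tv)
        for start, end in intervals:   # half-open frame interval [start, end)
            patterns[start:end, col] = 1
    return patterns

# Toy score: a lip-aperture gesture overlapping a velum gesture,
# as in a nasal-consonant closure.
score = {"LA": [(0, 4)], "VEL": [(2, 6)]}
gpv = gestural_pattern_vectors(score, n_frames=8)
# At frame 3 both gestures are active: [1, 0, 0, 1, 0]
```

Each distinct row of `gpv` is one gestural pattern vector; classifying the vector at every frame recovers the ensemble of overlapping gestures, which is the sense in which the abstract equates the two tasks.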

Original language: English
Pages (from-to): 1489-1492
Number of pages: 4
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Publication status: Published - 2008
Externally published: Yes
Event: INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association - Brisbane, QLD, Australia
Duration: 2008 Sept 22 – 2008 Sept 26

Keywords

  • Artificial neural network
  • Gaussian mixture model
  • Speech gesture
  • Speech production
  • Tandem model

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Sensory Systems

