Visual images synchronized with audio signals can provide a user-friendly interface for human-machine interaction. Visual speech can be represented as a sequence of visemes, the generic face images corresponding to particular sounds. We use hidden Markov models (HMMs) to convert audio signals into a sequence of visemes. In this paper, we compare two approaches to using HMMs. In the first approach, an HMM is trained for each triviseme, which is a viseme with its left and right context, and the audio signals are recognized directly as a sequence of trivisemes. In the second approach, each triphone is modeled with an HMM, and a general triphone recognizer produces a triphone sequence from the audio signals. The triviseme or triphone sequence is then converted into a viseme sequence. The performance of the two viseme recognition systems is evaluated on the TIMIT speech corpus.
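The final step of the second approach — converting a recognized triphone sequence into a viseme sequence — can be sketched as a table lookup on each triphone's center phoneme. The sketch below is illustrative only: the phoneme-to-viseme table is a small hypothetical subset, not the mapping used in the paper, and the `l-center+r` triphone notation is an assumption borrowed from common HMM toolkits.

```python
# Hypothetical phoneme-to-viseme table (illustrative subset, NOT the paper's mapping).
PHONEME_TO_VISEME = {
    "p": "V_bilabial", "b": "V_bilabial", "m": "V_bilabial",
    "f": "V_labiodental", "v": "V_labiodental",
    "iy": "V_spread", "ih": "V_spread",
    "aa": "V_open", "ah": "V_open",
}

def triphone_center(triphone: str) -> str:
    """Extract the center phoneme from a triphone written as 'left-center+right'."""
    left, _, rest = triphone.partition("-")
    return rest.partition("+")[0] if rest else left

def triphones_to_visemes(triphones):
    """Map each triphone's center phoneme to a viseme, merging consecutive repeats."""
    visemes = []
    for tri in triphones:
        v = PHONEME_TO_VISEME.get(triphone_center(tri), "V_other")
        if not visemes or visemes[-1] != v:  # collapse runs of the same viseme
            visemes.append(v)
    return visemes

print(triphones_to_visemes(["sil-b+aa", "b-aa+m", "aa-m+iy"]))
# → ['V_bilabial', 'V_open', 'V_bilabial']
```

The same lookup works for the first (triviseme) approach if the recognizer's output units already carry viseme labels; only the table changes.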
Host publication: Intelligent Data Engineering and Automated Learning - IDEAL 2002 - 3rd International Conference, Proceedings
Editors: Hujun Yin, Nigel Allinson, Richard Freeman, John Keane, Simon Hubbard
Published: 2002
Event: 3rd International Conference on Intelligent Data Engineering and Automated Learning, IDEAL 2002 - Manchester, United Kingdom
Duration: 12 Aug 2002 → 14 Aug 2002
Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Bibliographical note: Publisher Copyright © Springer-Verlag Berlin Heidelberg 2002.
ASJC Scopus subject areas
- Theoretical Computer Science
- General Computer Science