Abstract
Effective fusion of the acoustic and visual modalities in speech recognition has long been an important issue in human-computer interfaces, as it promises further gains in intelligibility and robustness. Among visual cues, speaker lip motion is the most linguistically relevant feature for speech recognition. In this paper, we present a new hybrid approach to lip localization and tracking, aimed at improving speech recognition in noisy environments. The approach begins with a new color space transformation that enhances lip segmentation: a PCA method is employed to derive a one-dimensional color space that maximizes the discrimination between lip and non-lip colors, and intensity information is incorporated to improve the contrast of the upper and corner lip segments. In the subsequent step, a constrained yet highly flexible deformable lip model is constructed to accurately capture and track lip shapes. The model requires only six degrees of freedom and provides a precise description of lip shapes using a simple least-squares fitting method. Experimental results indicate that the proposed hybrid approach delivers reliable and accurate localization and tracking of lip motion under various measurement conditions.
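To make the color space transformation concrete, the sketch below shows one way a PCA-derived one-dimensional lip-color space could be constructed from labeled pixel samples. The function names, the sample arrays, and the criterion of keeping the principal axis that best separates the class means are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def pca_lip_axis(lip_rgb, nonlip_rgb):
    """Derive a 1-D lip-color axis from labeled RGB samples via PCA.

    lip_rgb, nonlip_rgb: (N, 3) and (M, 3) arrays of RGB samples.
    Returns a unit 3-vector; projecting a pixel's RGB value onto it
    yields its coordinate in the new one-dimensional color space.
    """
    samples = np.vstack([lip_rgb, nonlip_rgb]).astype(np.float64)
    # Principal axes of the pooled color distribution.
    _, _, vt = np.linalg.svd(samples - samples.mean(axis=0),
                             full_matrices=False)
    # Keep the principal axis along which the lip and non-lip class
    # means are farthest apart (assumed selection criterion).
    means_gap = lip_rgb.mean(axis=0) - nonlip_rgb.mean(axis=0)
    axis = vt[np.argmax(np.abs(vt @ means_gap))]
    return axis / np.linalg.norm(axis)

def to_lip_space(image_rgb, axis):
    """Project an (H, W, 3) RGB image onto the 1-D lip-color axis."""
    return image_rgb.astype(np.float64) @ axis
```

The projected image could then be blended with intensity to boost the contrast of the upper lip and lip corners, as the abstract describes, before thresholding for segmentation.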
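The abstract does not specify the deformable model, so the following sketch illustrates one plausible six-degree-of-freedom parameterization fitted with simple least squares: mouth center, half-width, two parabolic contour heights, and in-plane rotation. The parabolic contours and all names here are assumptions for illustration, not the paper's model.

```python
import numpy as np

def fit_lip_model(upper_pts, lower_pts):
    """Least-squares fit of a hypothetical six-parameter lip model.

    upper_pts, lower_pts: (N, 2) arrays of edge points on the upper
    and lower lip contours.  The six degrees of freedom assumed here
    are mouth center (xc, yc), half-width w, upper and lower lip
    heights h_u and h_l, and rotation theta.  Each contour is
    modeled as a parabola y = h * (1 - (x / w)**2) in the
    mouth-aligned frame.
    """
    up = np.asarray(upper_pts, dtype=np.float64)
    lo = np.asarray(lower_pts, dtype=np.float64)
    pts = np.vstack([up, lo])
    center = pts.mean(axis=0)                       # (xc, yc)
    # Mouth axis = first principal direction of the edge points.
    _, _, vt = np.linalg.svd(pts - center, full_matrices=False)
    theta = np.arctan2(vt[0, 1], vt[0, 0])
    # Rotate points into the mouth-aligned frame.
    c, s = np.cos(-theta), np.sin(-theta)
    rot = np.array([[c, -s], [s, c]])
    up = (up - center) @ rot.T
    lo = (lo - center) @ rot.T
    w = np.abs(np.vstack([up, lo])[:, 0]).max()     # half-width
    # Closed-form least squares for each height h in y = h * b(x).
    def height(p):
        b = 1.0 - (p[:, 0] / w) ** 2
        return float(b @ p[:, 1] / (b @ b))
    return dict(xc=center[0], yc=center[1], w=w,
                h_u=height(up), h_l=height(lo), theta=theta)
```

With the width and rotation fixed geometrically, each height reduces to a one-parameter linear least-squares problem, which keeps a fit of this kind cheap enough to run per frame when tracking.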
Original language | English
---|---
Pages | 90–93
Number of pages | 4
Publication status | Published - 2008
Event | 2008 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, MFI - Seoul, Korea, Republic of. Duration: 2008 Aug 20 → 2008 Aug 22
Other

Other | 2008 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, MFI
---|---
Country/Territory | Korea, Republic of
City | Seoul
Period | 2008 Aug 20 → 2008 Aug 22
ASJC Scopus subject areas
- Control and Systems Engineering
- Software
- Computer Science Applications