Sign language comprises two categories of signals: manual signals, such as signs and fingerspelling, and non-manual signals, such as body gestures and facial expressions. This paper proposes a new method for recognizing manual signals together with facial expressions as non-manual signals. The proposed method involves three steps: First, a hierarchical conditional random field detects candidate segments of manual signals. Second, the BoostMap embedding method verifies the hand shapes of segmented signs and recognizes fingerspelling. Finally, a support vector machine recognizes facial expressions as non-manual signals; this final step is invoked only when the previous two steps leave ambiguity. Experimental results indicate that the proposed method recognizes sign language with 84% accuracy on utterance data.
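The decision flow described above can be sketched as follows. This is a minimal illustration of the control logic only: the classifiers are stand-in stubs, not the paper's hierarchical CRF, BoostMap, or SVM models, and the ambiguity threshold is an assumed parameter not given in the abstract.

```python
# Illustrative sketch of the three-step flow: (1) segment candidate manual
# signals, (2) verify hand shapes, (3) consult facial expressions (non-manual
# signals) only when the manual channels remain ambiguous.
# All models here are placeholder stubs, not the paper's actual methods.

from dataclasses import dataclass, field

@dataclass
class Segment:
    frames: list = field(default_factory=list)  # feature frames of a candidate sign
    label: str = ""                             # best manual-signal hypothesis
    confidence: float = 0.0                     # score from the manual recognizer

# Assumed cutoff; the abstract does not specify how ambiguity is measured.
AMBIGUITY_THRESHOLD = 0.6

def classify_facial_expression(frames):
    """Stub for the non-manual (facial-expression) classifier."""
    return "wh-question"  # placeholder output

def recognize(segment: Segment) -> str:
    # Steps 1-2 have already produced a manual-signal hypothesis with a
    # confidence score; step 3 adds non-manual information only when the
    # manual recognition is ambiguous.
    if segment.confidence >= AMBIGUITY_THRESHOLD:
        return segment.label
    return f"{segment.label} ({classify_facial_expression(segment.frames)})"
```

For example, a confidently recognized sign is returned directly, while a low-confidence hypothesis is augmented with the facial-expression label.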
Funding Information:
This work was supported by the World Class University Program through the National Research Foundation of Korea, funded by the Ministry of Education, Science, and Technology, under Grant R31-10008. This work was also supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MEST) (No. 2009-0086841).
Keywords
- BoostMap embedding
- Conditional random field
- Sign language recognition
- Support vector machine
ASJC Scopus subject areas
- Signal Processing
- Computer Vision and Pattern Recognition
- Artificial Intelligence