TY - GEN
T1 - Combination of manual and non-manual features for sign language recognition based on conditional random field and active appearance model
AU - Yang, Hee Deok
AU - Lee, Seong Whan
PY - 2011
Y1 - 2011
N2 - Sign language recognition is the task of detecting and recognizing manual signals (MSs) and non-manual signals (NMSs) in a signed utterance. In this paper, a novel method for recognizing MSs and facial expressions as NMSs is proposed. This is achieved through a framework consisting of three components: (1) Candidate segments of MSs are discriminated using a hierarchical conditional random field (CRF) and BoostMap embedding. This component can distinguish signs, fingerspellings, and non-sign patterns, and is robust to the various sizes, scales, and rotations of the signer's hand. (2) Facial expressions as NMSs are recognized with a support vector machine (SVM) and an active appearance model (AAM); the AAM is used to extract facial feature points. From these feature points, several measurements are computed to classify each facial component into defined facial expressions with the SVM. (3) Finally, the recognition results of MSs and NMSs are fused in order to recognize signed sentences. Experiments demonstrate that the proposed method can successfully combine MS and NMS features for recognizing signed sentences from utterance data.
AB - Sign language recognition is the task of detecting and recognizing manual signals (MSs) and non-manual signals (NMSs) in a signed utterance. In this paper, a novel method for recognizing MSs and facial expressions as NMSs is proposed. This is achieved through a framework consisting of three components: (1) Candidate segments of MSs are discriminated using a hierarchical conditional random field (CRF) and BoostMap embedding. This component can distinguish signs, fingerspellings, and non-sign patterns, and is robust to the various sizes, scales, and rotations of the signer's hand. (2) Facial expressions as NMSs are recognized with a support vector machine (SVM) and an active appearance model (AAM); the AAM is used to extract facial feature points. From these feature points, several measurements are computed to classify each facial component into defined facial expressions with the SVM. (3) Finally, the recognition results of MSs and NMSs are fused in order to recognize signed sentences. Experiments demonstrate that the proposed method can successfully combine MS and NMS features for recognizing signed sentences from utterance data.
KW - Sign language recognition
KW - active appearance model
KW - conditional random field
KW - manual sign
KW - non-manual sign
KW - support vector machine
UR - http://www.scopus.com/inward/record.url?scp=80155203162&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=80155203162&partnerID=8YFLogxK
U2 - 10.1109/ICMLC.2011.6016973
DO - 10.1109/ICMLC.2011.6016973
M3 - Conference contribution
AN - SCOPUS:80155203162
SN - 9781457703065
T3 - Proceedings - International Conference on Machine Learning and Cybernetics
SP - 1726
EP - 1731
BT - Proceedings of 2011 International Conference on Machine Learning and Cybernetics, ICMLC 2011
T2 - 2011 International Conference on Machine Learning and Cybernetics, ICMLC 2011
Y2 - 10 July 2011 through 13 July 2011
ER -