TY - JOUR
T1 - Gesture spotting and recognition for human-robot interaction
AU - Yang, Hee Deok
AU - Park, A-Yeon
AU - Lee, Seong Whan
N1 - Funding Information:
Manuscript received March 16, 2006; revised September 10, 2006. This paper was recommended for publication by Associate Editor H. Zhuang and Editor K. Lynch upon evaluation of the reviewers’ comments. This work was supported by the Intelligent Robotics Development Program, one of the 21st Century Frontier R&D Programs funded by the Ministry of Commerce, Industry and Energy of Korea. This paper was presented in part at the 7th IEEE International Conference on Automatic Face and Gesture Recognition, Southampton, U.K., April 2006.
PY - 2007/4
Y1 - 2007/4
N2 - Visual interpretation of gestures can be useful in accomplishing natural human-robot interaction (HRI). Previous HRI research focused on issues such as hand gestures, sign language, and command gesture recognition. Automatic recognition of whole-body gestures is required in order for HRI to operate naturally. This presents a challenging problem, because describing and modeling meaningful gesture patterns from whole-body gestures is a complex task. This paper presents a new method for recognition of whole-body key gestures in HRI. A human subject is first described by a set of features, encoding the angular relationship between a dozen body parts in 3-D. A feature vector is then mapped to a codeword of hidden Markov models. In order to spot key gestures accurately, a sophisticated method of designing a transition gesture model is proposed. To reduce the states of the transition gesture model, model reduction, which merges similar states based on data-dependent statistics and relative entropy, is used. The experimental results demonstrate that the proposed method can be efficient and effective in HRI for automatic recognition of whole-body key gestures from motion sequences.
KW - Gesture spotting
KW - Hidden Markov model (HMM)
KW - Human-robot interaction (HRI)
KW - Mobile robot
KW - Transition gesture model
KW - Whole-body gesture recognition
UR - http://www.scopus.com/inward/record.url?scp=34247223015&partnerID=8YFLogxK
U2 - 10.1109/TRO.2006.889491
DO - 10.1109/TRO.2006.889491
M3 - Article
AN - SCOPUS:34247223015
SN - 1552-3098
VL - 23
SP - 256
EP - 270
JO - IEEE Transactions on Robotics
JF - IEEE Transactions on Robotics
IS - 2
ER -