TY - GEN
T1 - Volume motion template for view-invariant gesture recognition
AU - Roh, Myung Cheol
AU - Shin, Ho Keun
AU - Lee, Sang Woong
AU - Lee, Seong Whan
PY - 2006
Y1 - 2006
N2 - The representation of gestures changes dynamically depending on the camera viewpoint. This viewpoint problem is difficult to solve in environments with a single directional camera, since the shape and motion information representing a gesture differ across viewpoints. View-based methods require data for each viewpoint, which is inefficient and leads to ambiguity in recognizing gestures. In this paper, we propose a volume motion template (VMT) to overcome the viewpoint problem in a single-directional stereo camera environment. The VMT represents motion information in 3D space using disparity maps. Motion orientation is determined from the 3D motion information, and the projection of the VMT onto an optimal virtual viewpoint is obtained from this orientation. The proposed method is not only independent of viewpoint variations but can also represent motion in depth. The method has been evaluated on view-invariant representation and recognition using gesture sequences that include motion parallel to the optical axis. The experimental results demonstrated the effectiveness of the proposed VMT for view-invariant gesture recognition.
AB - The representation of gestures changes dynamically depending on the camera viewpoint. This viewpoint problem is difficult to solve in environments with a single directional camera, since the shape and motion information representing a gesture differ across viewpoints. View-based methods require data for each viewpoint, which is inefficient and leads to ambiguity in recognizing gestures. In this paper, we propose a volume motion template (VMT) to overcome the viewpoint problem in a single-directional stereo camera environment. The VMT represents motion information in 3D space using disparity maps. Motion orientation is determined from the 3D motion information, and the projection of the VMT onto an optimal virtual viewpoint is obtained from this orientation. The proposed method is not only independent of viewpoint variations but can also represent motion in depth. The method has been evaluated on view-invariant representation and recognition using gesture sequences that include motion parallel to the optical axis. The experimental results demonstrated the effectiveness of the proposed VMT for view-invariant gesture recognition.
UR - http://www.scopus.com/inward/record.url?scp=34047223583&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=34047223583&partnerID=8YFLogxK
U2 - 10.1109/ICPR.2006.1183
DO - 10.1109/ICPR.2006.1183
M3 - Conference contribution
AN - SCOPUS:34047223583
SN - 0769525210
SN - 9780769525211
T3 - Proceedings - International Conference on Pattern Recognition
SP - 1229
EP - 1232
BT - Proceedings - 18th International Conference on Pattern Recognition, ICPR 2006
T2 - 18th International Conference on Pattern Recognition, ICPR 2006
Y2 - 20 August 2006 through 24 August 2006
ER -