Volume motion template for view-invariant gesture recognition

Myung Cheol Roh, Ho Keun Shin, Sang Woong Lee, Seong Whan Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



The representation of a gesture changes dynamically with the camera viewpoint. This viewpoint problem is difficult to solve in an environment with a single directional camera, since the shape and motion information that represents a gesture differs across viewpoints. View-based methods require data for every viewpoint, which makes gesture recognition inefficient and ambiguous. In this paper, we propose a volume motion template (VMT) to overcome the viewpoint problem in a single-directional stereo-camera environment. The VMT represents motion information in 3D space using disparity maps, and the motion orientation is determined from this 3D motion information. A projection of the VMT at the optimal virtual viewpoint can then be obtained from the motion orientation. The proposed method is not only independent of viewpoint variations but can also represent motion in depth. It has been evaluated for view-invariant representation and recognition on gesture sequences that include motion parallel to the optical axis, and the experimental results demonstrate the effectiveness of the proposed VMT for view-invariant gesture recognition.
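The core idea of the VMT, accumulating observed motion into a 3D volume by using stereo disparity for the depth dimension (a 3D analogue of 2D motion history images), can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the function name, the decay parameters `tau` and `delta`, and the linear disparity quantization are all assumptions.

```python
import numpy as np

def update_vmt(vmt, motion_mask, disparity, tau=255, delta=32,
               depth_bins=64, max_disp=64.0):
    """One VMT update step: decay the volume, then stamp current motion.

    vmt         : (H, W, depth_bins) float array, accumulated template
    motion_mask : (H, W) bool array, pixels that moved in this frame
    disparity   : (H, W) float array, stereo disparity per pixel
    """
    # Temporal decay, as in 2D motion history images, applied to the volume
    vmt = np.maximum(vmt - delta, 0)
    # Quantize disparity into depth bins (illustrative linear mapping)
    z = np.clip((disparity / max_disp * depth_bins).astype(int),
                0, depth_bins - 1)
    # Stamp the most recent motion at full intensity in its depth slice
    ys, xs = np.nonzero(motion_mask)
    vmt[ys, xs, z[ys, xs]] = tau
    return vmt
```

Iterating this update over a gesture sequence yields a volume whose intensity encodes recency of motion at each 3D position; the template can then be projected at a chosen virtual viewpoint for recognition.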

Original language: English
Title of host publication: Proceedings - 18th International Conference on Pattern Recognition, ICPR 2006
Number of pages: 4
Publication status: Published - 2006
Event: 18th International Conference on Pattern Recognition, ICPR 2006 - Hong Kong, China
Duration: 2006 Aug 20 - 2006 Aug 24

Publication series

Name: Proceedings - International Conference on Pattern Recognition
ISSN (Print): 1051-4651


Other: 18th International Conference on Pattern Recognition, ICPR 2006
City: Hong Kong

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition

