TY - GEN
T1 - View-independent human action recognition based on a stereo camera
AU - Roh, Myung Cheol
AU - Shin, Ho Keun
AU - Lee, Seong Whan
N1 - Funding Information:
This work was supported by the Korea Science and Engineering Foundation (KOSEF) grant funded by the Korea government (MEST) (No. 2009-0060113).
Copyright:
Copyright 2010 Elsevier B.V., All rights reserved.
PY - 2009
Y1 - 2009
N2 - Vision-based human action recognition provides an advanced human-computer interface, and research in this field has been carried out actively. However, in real 3D environments a camera may observe a person from any position and direction, so viewpoint changes must be taken into account. To overcome this viewpoint dependency, we propose a Volume Motion Template (VMT) and a Projected Motion Template (PMT). The proposed VMT extends the Motion History Image (MHI) method to 3D space. The PMT is generated by projecting the VMT onto a 2D plane orthogonal to an optimal virtual viewpoint, where the optimal virtual viewpoint is the viewpoint from which the action can be described in greatest detail in 2D space. With the proposed method, actions captured from different viewpoints can be recognized independently of the viewpoint. The experimental results demonstrate the accuracy and effectiveness of the proposed VMT method for view-independent human action recognition.
AB - Vision-based human action recognition provides an advanced human-computer interface, and research in this field has been carried out actively. However, in real 3D environments a camera may observe a person from any position and direction, so viewpoint changes must be taken into account. To overcome this viewpoint dependency, we propose a Volume Motion Template (VMT) and a Projected Motion Template (PMT). The proposed VMT extends the Motion History Image (MHI) method to 3D space. The PMT is generated by projecting the VMT onto a 2D plane orthogonal to an optimal virtual viewpoint, where the optimal virtual viewpoint is the viewpoint from which the action can be described in greatest detail in 2D space. With the proposed method, actions captured from different viewpoints can be recognized independently of the viewpoint. The experimental results demonstrate the accuracy and effectiveness of the proposed VMT method for view-independent human action recognition.
KW - Human action recognition
KW - Motion history image
KW - View-independence
KW - Volume motion template
UR - http://www.scopus.com/inward/record.url?scp=74549134124&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=74549134124&partnerID=8YFLogxK
U2 - 10.1109/CCPR.2009.5343991
DO - 10.1109/CCPR.2009.5343991
M3 - Conference contribution
AN - SCOPUS:74549134124
SN - 9781424441990
T3 - Proceedings of the 2009 Chinese Conference on Pattern Recognition, CCPR 2009, and the 1st CJK Joint Workshop on Pattern Recognition, CJKPR
SP - 832
EP - 836
BT - Proceedings of the 2009 Chinese Conference on Pattern Recognition, CCPR 2009, and the 1st CJK Joint Workshop on Pattern Recognition, CJKPR
T2 - 2009 Chinese Conference on Pattern Recognition, CCPR 2009 and the 1st CJK Joint Workshop on Pattern Recognition, CJKPR
Y2 - 4 November 2009 through 6 November 2009
ER -