Biased manifold learning for view invariant body pose estimation

    Research output: Contribution to journal › Article › peer-review

    Abstract

    In human body pose estimation, manifold learning has been considered a useful method for reducing the dimensionality of 2D images and of 3D body configuration data. Most commonly, body pose is estimated from silhouettes derived from images or image sequences. A major problem in applying manifold learning to pose estimation is its vulnerability to silhouette variation caused by changes in factors such as viewpoint, person, and distance. In this paper, we propose a novel approach that combines three separate manifolds for viewpoint, pose, and 3D body configuration, focusing on the problem of viewpoint-induced silhouette variation. Biased manifold learning is used to learn these manifolds with appropriately weighted distances. The proposed method requires four mapping functions, which are learned by a generalized regression neural network for robustness. Despite using only three manifolds, experimental results show that the proposed method can reliably estimate 3D body poses from 2D images across all learned viewpoints.
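
    As a rough illustration of the two ingredients named in the abstract, the sketch below (a hypothetical reconstruction, not the authors' implementation) biases pairwise silhouette distances by pose-label dissimilarity before a standard manifold embedding, and uses a generalized regression neural network (GRNN, equivalent to Nadaraya-Watson kernel regression) as a smooth mapping function. The toy data, the exact form of the bias weighting, and all names are assumptions made for the example.

    ```python
    # Minimal sketch, assuming toy data: silhouette feature vectors with a
    # known 1D pose label. Not the paper's method; an illustration of the
    # general "biased distances + GRNN mapping" recipe.
    import numpy as np
    from scipy.spatial.distance import cdist, pdist, squareform
    from sklearn.manifold import MDS

    def biased_distances(features, labels, alpha=1.0):
        """Inflate feature-space distances between samples whose pose
        labels differ (one assumed form of distance biasing)."""
        d_feat = squareform(pdist(features))        # silhouette distances
        d_lab = squareform(pdist(labels[:, None]))  # pose-label distances
        return d_feat * (1.0 + alpha * d_lab / (d_lab.max() + 1e-12))

    def grnn_predict(x_train, y_train, x_query, sigma=0.5):
        """GRNN: Gaussian-kernel-weighted average of training targets."""
        w = np.exp(-cdist(x_query, x_train, "sqeuclidean") / (2.0 * sigma**2))
        return (w @ y_train) / w.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(0)
    silhouettes = rng.normal(size=(200, 64))  # toy 2D-image feature vectors
    poses = rng.uniform(0, 2 * np.pi, 200)    # toy 1D pose parameter

    # Embed the biased distance matrix with a standard manifold learner
    # (metric MDS here, standing in for the paper's embedding step).
    embed = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    manifold = embed.fit_transform(biased_distances(silhouettes, poses))

    # GRNN mapping from image features onto the learned manifold.
    coords = grnn_predict(silhouettes, manifold, silhouettes[:5])
    print(coords.shape)  # (5, 2)
    ```

    Because nearby pose labels shrink the biased distances, the embedding organizes samples by pose rather than by nuisance appearance variation; the GRNN then gives a smooth, training-free-at-inference mapping for new silhouettes.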

    Original language: English
    Article number: 1250058
    Journal: International Journal of Wavelets, Multiresolution and Information Processing
    Volume: 10
    Issue number: 6
    Publication status: Published - 2012 Nov

    Bibliographical note

    Funding Information:
    This work was supported by WCU (World Class University) program through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology, under Grant R31-10008.

    Keywords

    • 3D pose estimation
    • manifold learning
    • nonlinear dimensionality reduction

    ASJC Scopus subject areas

    • Signal Processing
    • Information Systems
    • Applied Mathematics
