Adaptive occlusion state estimation for human pose tracking under self-occlusions

Nam Gyu Cho, Alan L. Yuille, Seong Whan Lee

Research output: Contribution to journal › Article › peer-review

36 Citations (Scopus)


Tracking human poses in video can be considered the process of inferring the positions of the body joints. Among the various obstacles to this task, one of the most challenging is dealing with 'self-occlusion', where one body part occludes another. To tackle this problem, a model must represent the self-occlusion relationships between different body parts, which leads to complex inference problems. In this paper, we propose a method that estimates occlusion states adaptively. A Markov random field is used to represent the occlusion relationship between human body parts in terms of an occlusion state variable, which represents the depth order. To ensure efficient computation, inference is divided into two steps: a body pose inference step and an occlusion state inference step. We test our method on video sequences from the HumanEva dataset. We label the data to quantify how the relative depth ordering of parts, and hence the self-occlusion, changes during each video sequence. We then demonstrate that our method can successfully track human poses even when there are frequent occlusion changes. We compare our approach to alternative methods, including a state-of-the-art approach that uses multiple cameras.
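The two-step inference described in the abstract can be sketched as an alternating loop: fix the occlusion states and update the pose, then fix the pose and re-infer the depth order of overlapping parts. The following is a minimal toy sketch of that idea only; the part representation, the overlap test, and the down-weighting of occluded observations are all illustrative assumptions, not the paper's actual MRF formulation.

```python
# Toy sketch of two-step alternating inference for self-occlusion:
# (1) pose step with occlusion states fixed, (2) occlusion-state step
# (relative depth order) with the pose fixed. All names and weights
# here are illustrative assumptions.

from itertools import combinations

def overlaps(pa, pb, radius=1.0):
    """Two parts 'overlap' in the image if their 2D positions are close."""
    (xa, ya), (xb, yb) = pa["pos"], pb["pos"]
    return (xa - xb) ** 2 + (ya - yb) ** 2 < (2 * radius) ** 2

def occlusion_step(parts):
    """Infer pairwise occlusion states (relative depth order)
    from the current pose estimate."""
    states = {}
    for i, j in combinations(sorted(parts), 2):
        if overlaps(parts[i], parts[j]):
            # the part with smaller depth is in front (the occluder)
            front = i if parts[i]["depth"] < parts[j]["depth"] else j
            states[(i, j)] = front
    return states

def pose_step(parts, observations, states, lr=0.5):
    """Move each part's 2D position toward its observation; occluded
    parts get a down-weighted update, since their image evidence is
    unreliable."""
    occluded = {pair[1] if front == pair[0] else pair[0]
                for pair, front in states.items()}
    for name, part in parts.items():
        w = lr * (0.2 if name in occluded else 1.0)
        ox, oy = observations[name]
        x, y = part["pos"]
        part["pos"] = (x + w * (ox - x), y + w * (oy - y))

def track_frame(parts, observations, iters=5):
    """Alternate the two inference steps for one frame."""
    states = {}
    for _ in range(iters):
        pose_step(parts, observations, states)
        states = occlusion_step(parts)
    return parts, states
```

For example, a hand crossing in front of the torso would be assigned the front position in the occlusion state, and the torso's (partially hidden) observation would be trusted less in subsequent pose updates.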

Original language: English
Pages (from-to): 649-661
Number of pages: 13
Journal: Pattern Recognition
Issue number: 3
Publication status: Published - 2013 Mar


Keywords

  • 3D human pose tracking
  • Computer vision
  • Self-occlusion

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

