TY - GEN
T1 - The POETICON enacted scenario corpus: A tool for human and computational experiments on action understanding
AU - Wallraven, Christian
AU - Schultze, Michael
AU - Mohler, Betty
AU - Vatakis, Argiro
AU - Pastra, Katerina
PY - 2011
Y1 - 2011
AB - A good data corpus lies at the heart of progress in both perceptual/cognitive science and computer vision. While a few datasets deal with simple actions, creating a realistic corpus of complex, long action sequences that also contains human-human interactions has, to our knowledge, not been attempted so far. Here, we introduce such a corpus for (inter)action understanding that contains six everyday scenarios taking place in a kitchen/living-room setting. Each scenario was acted out several times by different pairs of actors and contains simple object interactions as well as spoken dialogue. In addition, each scenario was recorded both with several HD cameras and with motion capture of the actors and several key objects. Access to the motion-capture data allows not only for kinematic analyses but also for the production of realistic animations in which all aspects of the scenario can be fully controlled. We also present results from a first series of perceptual experiments that show how humans are able to infer scenario classes, as well as individual actions and objects, from computer animations of everyday situations. These results can serve as a benchmark for future computational approaches that begin to take on complex action understanding.
UR - http://www.scopus.com/inward/record.url?scp=79958722724&partnerID=8YFLogxK
U2 - 10.1109/FG.2011.5771446
DO - 10.1109/FG.2011.5771446
M3 - Conference contribution
AN - SCOPUS:79958722724
SN - 978-1-4244-9140-7
T3 - 2011 IEEE International Conference on Automatic Face and Gesture Recognition and Workshops, FG 2011
SP - 484
EP - 491
BT - 2011 IEEE International Conference on Automatic Face and Gesture Recognition and Workshops, FG 2011
T2 - 2011 IEEE International Conference on Automatic Face and Gesture Recognition and Workshops, FG 2011
Y2 - 21 March 2011 through 25 March 2011
ER -