TY - JOUR
T1 - Matching and searching for moving faces
AU - Pilz, Karin
AU - Thornton, Ian M.
AU - Bülthoff, Heinrich H.
N1 - Copyright:
Copyright 2004 Elsevier B.V., All rights reserved.
PY - 2003
Y1 - 2003
AB - Human faces are dynamic objects. Recently, we have been using a number of novel tasks to explore the visual system's sensitivity to these complex, moving stimuli. For example, Thornton & Kourtzi (2002) used an immediate matching task to show that moving primes (video clips) were better cues to identity than static primes (still images); Knappmeyer, Thornton & Bülthoff (2002) used motion capture and computer animation techniques to demonstrate that incidentally learned patterns of characteristic motion could bias the perception of identity when spatial morphs were used to reduce the saliency of form cues. Here, we present two sets of experiments that exploit a new database of high-quality digital video sequences captured from five temporally synchronized cameras (Kleiner, Wallraven & Bülthoff, 2002). In the first series of experiments, we examined whether a dynamic matching advantage for identity decisions would generalize across viewpoint. We found that a) dynamic primes led to a small but reliable (24 ms) overall matching advantage compared to static primes; b) matching speed with dynamic primes was unaffected by view direction (left or right) or viewing angle (0, 22, or 45 degrees); and c) static primes were not only slower, but were also more dependent on view direction and viewing angle. These results suggest that the additional information provided by the dynamic primes can compensate to some extent for viewpoint mismatches. In the second series of experiments, we examined visual search for expression singletons using arrays of moving faces. Our initial results indicate that search for faces can be much more efficient (15 ms/item) than previous studies using static images would suggest. Furthermore, as expression search using the same dynamic arrays turned upside down proved to be much harder (50 ms/item), it would appear that the observed upright performance is face-related, rather than relying on low-level static or dynamic cues.
UR - http://www.scopus.com/inward/record.url?scp=4243157498&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=4243157498&partnerID=8YFLogxK
U2 - 10.1167/3.9.820
DO - 10.1167/3.9.820
M3 - Article
AN - SCOPUS:4243157498
SN - 1534-7362
VL - 3
SP - 820a
JO - Journal of Vision
JF - Journal of Vision
IS - 9
ER -