TY - JOUR
T1 - Visual influence on path integration in darkness indicates a multimodal representation of large-scale space
AU - Tcheang, Lili
AU - Bülthoff, Heinrich H.
AU - Burgess, Neil
PY - 2011/1/18
N2 - Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map.
UR - http://www.scopus.com/inward/record.url?scp=79551641197&partnerID=8YFLogxK
DO - 10.1073/pnas.1011843108
M3 - Article
C2 - 21199934
AN - SCOPUS:79551641197
SN - 0027-8424
VL - 108
SP - 1152
EP - 1157
JF - Proceedings of the National Academy of Sciences of the United States of America
IS - 3
ER -