Combining sensory information to improve visualization

Marc Ernst, Martin Banks, Felix Wichmann, Laurence Maloney, Heinrich Buelthoff

Research output: Contribution to conference › Paper › peer-review


Seemingly without effort, the human brain reconstructs the three-dimensional environment surrounding us from the light pattern striking the eyes. This holds across almost all viewing and lighting conditions. One important factor behind this apparent ease is the redundancy of the information provided by the sensory organs. For example, perspective distortions, shading, motion parallax, and the disparity between the two eyes' images are all, at least in part, redundant signals that provide information about the three-dimensional layout of the visual scene. Our brain uses all these different sensory signals and combines the available information into a coherent percept. In displays visualizing data, however, the information is often highly reduced and abstracted, which may lead to an altered perception and therefore a misinterpretation of the visualized data. In this panel we will discuss mechanisms involved in the combination of sensory information and their implications for simulations using computer displays, as well as problems resulting from current display technology such as cathode-ray tubes.
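The abstract does not spell out a combination rule, but a widely used model of how redundant cues can be fused is inverse-variance (reliability-weighted) averaging: each cue's estimate is weighted by its reliability, and the fused estimate is more precise than any single cue. The sketch below is illustrative only; the function name and the example numbers are assumptions, not taken from the panel.

```python
def combine_cues(estimates, variances):
    """Fuse independent cue estimates by inverse-variance weighting.

    Each weight is 1/variance, so more reliable cues count more.
    Returns the fused estimate and its (reduced) variance.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_variance = 1.0 / total
    return fused, fused_variance

# Hypothetical example: a depth estimate from binocular disparity
# (variance 1.0, reliable) and one from shading (variance 4.0, noisy).
depth, var = combine_cues([10.0, 14.0], [1.0, 4.0])
# The fused variance (0.8) is smaller than either single cue's variance.
```

Note that the fused estimate lies closer to the more reliable cue, and its variance is lower than that of either cue alone, which is the statistical sense in which redundant signals make perception robust.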

Original language: English
Number of pages: 4
Publication status: Published - 2002
Event: VIS 2002, IEEE Visualisation 2002 - Boston, MA, United States
Duration: 2002 Oct 27 - 2002 Nov 1


Other: VIS 2002, IEEE Visualisation 2002
Country/Territory: United States
City: Boston, MA

ASJC Scopus subject areas

  • Software
  • General Computer Science
  • General Engineering
  • Computer Graphics and Computer-Aided Design

