An integrated VR platform for 3D and image based models: A step toward interactive image based virtual environments

Jayoung Yoon, Gerard Jounghyun Kim

Research output: Contribution to journal › Conference article › peer-review

Abstract

Traditionally, three-dimensional (3D) models have been used to build virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. Image-based rendering, on the other hand, has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to its limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al. [1], these different representations can serve as different LODs for a given object: for instance, an object might be rendered using a 3D model at close range, as a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. Devising such a platform poses several technical challenges: designing scene graph nodes for the various image-based techniques, establishing criteria for LOD/representation selection, handling the transitions between representations, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. For choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, the switch between the image and the 3D model occurs at the distance from the user at which the user starts to perceive the object's internal depth. Also, during interaction, a 3D representation is used regardless of viewing distance, if one exists. Before rendering, objects are conservatively culled against the view frustum using the representation with the largest volume. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
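The selection policy described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the class names, thresholds, and the `select_representation` function are hypothetical, and the perceptual depth threshold is assumed to be precomputed per object (the paper derives it experimentally).

```python
from dataclasses import dataclass
from enum import Enum, auto

class Rep(Enum):
    MODEL_3D = auto()   # full 3D geometric model
    BILLBOARD = auto()  # image-based billboard
    ENV_MAP = auto()    # rendered as part of the environment map

@dataclass
class SceneObject:
    has_3d_model: bool
    depth_threshold: float  # hypothetical: distance below which internal depth is perceptible
    far_threshold: float    # hypothetical: beyond this, fold the object into the environment map

def select_representation(obj: SceneObject, distance: float, interacting: bool) -> Rep:
    # During interaction, prefer the 3D representation regardless of distance, if it exists.
    if interacting and obj.has_3d_model:
        return Rep.MODEL_3D
    # Switch image -> 3D model once the user would start to perceive internal depth.
    if obj.has_3d_model and distance <= obj.depth_threshold:
        return Rep.MODEL_3D
    # Intermediate range: a flat image-based billboard suffices.
    if distance <= obj.far_threshold:
        return Rep.BILLBOARD
    # Far range: the object contributes only to the environment map.
    return Rep.ENV_MAP
```

A traversal of the mixed scene graph would call a selector like this per object each frame, then hand the chosen node type to the corresponding renderer.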

Original language: English
Pages (from-to): 9-16
Number of pages: 8
Journal: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 4756
DOIs
Publication status: Published - 2002
Externally published: Yes
Event: Third International Conference on Virtual Reality and Its Application in Industry - Hangzhou, China
Duration: 2002 Apr 9 - 2002 Apr 12

Keywords

  • Depth perception
  • Image based models
  • Interaction
  • Mixed rendering
  • Scene graph

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Computer Science Applications
  • Applied Mathematics
  • Electrical and Electronic Engineering
