What happens if we put vision and touch into conflict? Which modality 'wins'? Although several previous studies have addressed this topic, they have focused solely on the integration of vision and touch for low-level object properties (such as curvature, slant, or depth). In the present study, we introduce a multimodal mixed-reality setup based on real-time hand tracking, which was used to display real-world haptic exploration of objects in a virtual environment through a head-mounted display (HMD). With this setup we studied multimodal conflict situations involving objects varying along higher-level, parametrically controlled global shape properties. Participants explored these objects in both unimodal and multimodal settings, with the latter including congruent and incongruent conditions as well as differing instructions for weighting the input modalities. Results demonstrated a surprisingly clear touch dominance across all experiments, which was only marginally influenced by instructions asking participants to bias their modality weighting. We also present an initial analysis of the hand-tracking patterns that illustrates the potential of our setup to investigate exploration behavior in more detail.
Number of pages: 11
Journal: IEEE Transactions on Visualization and Computer Graphics
Publication status: Published - 2023 Dec 1
Bibliographical note: Publisher Copyright © 1995-2012 IEEE.
- Mixed reality
- hand tracking
- multisensory perception
ASJC Scopus subject areas
- Signal Processing
- Computer Vision and Pattern Recognition
- Computer Graphics and Computer-Aided Design