In this paper, we propose to use deep learning to estimate and predict the torso direction from head movements alone. This prediction makes it possible to implement a walk-in-place navigation interface without additional sensing of the torso direction, thereby improving convenience and usability. We created a small dataset and tested our idea by training an LSTM model, obtaining a 3-class prediction rate of about 90%, higher than that of other conventional machine learning techniques. While preliminary, the results show a possible inter-dependence between the viewing and torso directions, and with a richer dataset and more parameters, a more accurate level of prediction seems possible.
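The pipeline described in the abstract, a recurrent model mapping a sequence of head-movement features to one of three torso-direction classes, can be sketched as follows. This is a minimal illustrative forward pass only, not the authors' implementation: the feature dimensions, hidden size, class labels (e.g. left / forward / right), and the `TinyLSTMClassifier` name are all assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTMClassifier:
    """Hypothetical sketch: LSTM over head-pose frames -> 3 torso classes."""

    def __init__(self, n_in, n_hidden, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix for the four gates (input, forget, cell, output).
        self.W = rng.normal(0.0, 0.1, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        # Linear read-out from the final hidden state to class logits.
        self.Wo = rng.normal(0.0, 0.1, (n_classes, n_hidden))
        self.bo = np.zeros(n_classes)
        self.n_hidden = n_hidden

    def forward(self, seq):
        """seq: array of shape (T, n_in), one head-pose feature vector per frame."""
        H = self.n_hidden
        h = np.zeros(H)
        c = np.zeros(H)
        for x in seq:
            z = self.W @ np.concatenate([x, h]) + self.b
            i = sigmoid(z[:H])          # input gate
            f = sigmoid(z[H:2 * H])     # forget gate
            g = np.tanh(z[2 * H:3 * H]) # candidate cell state
            o = sigmoid(z[3 * H:])      # output gate
            c = f * c + i * g
            h = o * np.tanh(c)
        logits = self.Wo @ h + self.bo
        e = np.exp(logits - logits.max())
        return e / e.sum()  # softmax class probabilities

# Usage: a 30-frame sequence of 3-D head-orientation features (e.g. yaw/pitch/roll).
model = TinyLSTMClassifier(n_in=3, n_hidden=16, n_classes=3)
probs = model.forward(np.zeros((30, 3)))
pred = int(np.argmax(probs))  # predicted torso-direction class, 0..2
```

Training the weights (e.g. by backpropagation through time on labeled head/torso recordings) is omitted; the sketch only shows the sequence-to-class structure the abstract implies.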
|Title of host publication
|Proceedings - VRST 2019
|Subtitle of host publication
|25th ACM Symposium on Virtual Reality Software and Technology
|Stephen N. Spencer
|Association for Computing Machinery
|Published - 2019 Nov 12
|25th ACM Symposium on Virtual Reality Software and Technology, VRST 2019 - Sydney, Australia
Duration: 2019 Nov 12 → 2019 Nov 15
|Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST
|25th ACM Symposium on Virtual Reality Software and Technology, VRST 2019
|19/11/12 → 19/11/15
Bibliographical note
Funding Information:
This research was supported by the Technology Development Program of the Ministry of SMEs and Startups, and by the Flagship Project of the Korea Institute of Science and Technology.
© 2019 Copyright held by the owner/author(s).
- Deep learning
- Virtual reality
- Walking in place