Abstract
Conventional methods for real-time sound effects in 3D graphical and virtual environments have relied on preparing all the required samples ahead of time and simply replaying them as needed, or on parametrically modifying a basic set of samples with physically based techniques such as spring-damper simulation and modal analysis/synthesis. In this work, we propose to apply the generative adversarial network (GAN) approach to this problem, in which a single trained generator produces the required sounds quickly and with perceptually indistinguishable quality. With the conventional methods, by contrast, separate approximate models would be needed to handle different material properties and contact types while maintaining real-time performance. We demonstrate our claim by training a GAN (more specifically, WaveGAN) on sounds of different drums and synthesizing the sounds on the fly in a virtual drum-playing environment. A perceptual test revealed that subjects could neither discern the synthesized sounds from the ground truth nor perceive any noticeable delay between the physical event and the corresponding sound.
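The core idea of the approach described above is that, once a WaveGAN-style generator has been trained offline on drum recordings, the virtual environment only needs a single fast forward pass per collision event to obtain a fresh waveform. The following is a minimal sketch of that inference path, not the authors' code: the toy transposed-convolution generator, the checkpoint name `drum_generator.pt`, the layer sizes, and the `on_drum_hit` hook are hypothetical placeholders standing in for an actual trained WaveGAN model wired into a VR engine.

```python
# Hedged sketch: on-demand waveform synthesis from a pre-trained generator,
# in the spirit of WaveGAN. Architecture and file names are illustrative only.
import torch
import torch.nn as nn
import numpy as np
from scipy.io import wavfile

LATENT_DIM = 100      # WaveGAN's default latent size
SAMPLE_RATE = 16000   # assumed audio sampling rate

class TinyWaveGenerator(nn.Module):
    """Toy transposed-convolution generator mapping a latent vector to audio."""
    def __init__(self, latent_dim=LATENT_DIM):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 16)  # project latent code, then reshape
        self.net = nn.Sequential(
            nn.ConvTranspose1d(256, 128, kernel_size=25, stride=4,
                               padding=12, output_padding=3),
            nn.ReLU(),
            nn.ConvTranspose1d(128, 64, kernel_size=25, stride=4,
                               padding=12, output_padding=3),
            nn.ReLU(),
            nn.ConvTranspose1d(64, 1, kernel_size=25, stride=4,
                               padding=12, output_padding=3),
            nn.Tanh(),  # waveform constrained to [-1, 1]
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 256, 16)
        return self.net(x)

generator = TinyWaveGenerator()
# generator.load_state_dict(torch.load("drum_generator.pt"))  # hypothetical checkpoint
generator.eval()

def on_drum_hit():
    """Called by the VR engine when a stick-drum collision is detected."""
    z = torch.randn(1, LATENT_DIM)          # fresh latent code for each hit
    with torch.no_grad():
        wave = generator(z).squeeze().numpy()
    return wave                              # hand off to the audio output device

if __name__ == "__main__":
    wav = on_drum_hit()
    wavfile.write("hit.wav", SAMPLE_RATE, wav.astype(np.float32))
```

Because synthesis is a single forward pass through a small convolutional network, the latency per event stays well within what the paper reports as perceptually unnoticeable; the same generator can also be exported to an engine-side runtime rather than called from Python.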
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings - 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality, AIVR 2018 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 144-148 |
| Number of pages | 5 |
| ISBN (Electronic) | 9781538692691 |
| DOIs | |
| Publication status | Published - 2018 Jul 2 |
| Event | 1st IEEE International Conference on Artificial Intelligence and Virtual Reality, AIVR 2018 - Taichung, Taiwan, Province of China; Duration: 2018 Dec 10 → 2018 Dec 12 |
Publication series
| Name | Proceedings - 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality, AIVR 2018 |
| --- | --- |
Conference
| Conference | 1st IEEE International Conference on Artificial Intelligence and Virtual Reality, AIVR 2018 |
| --- | --- |
| Country/Territory | Taiwan, Province of China |
| City | Taichung |
| Period | 18/12/10 → 18/12/12 |
Bibliographical note
Funding Information: This research was partially supported by an Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korean government (MSIP No. 2017-0-00179), and by the Global Frontier R&D Program on Human-centered Interaction for Coexistence funded by the NRF of Korea (NRF-2015M3A6A3076490).
Publisher Copyright:
© 2018 IEEE.
Keywords
- Generation of immersive environments and virtual worlds
- Machine learning for multimodal interaction
- Multimodal interaction and experiences in VR/AR
ASJC Scopus subject areas
- Artificial Intelligence
- Media Technology
- Computer Science Applications