Abstract
Recent successes suggest that an image can be manipulated by a text prompt, e.g., a landscape scene on a sunny day is manipulated into the same scene on a rainy day driven by the text input “raining”. These approaches often utilize a StyleCLIP-based image generator, which leverages a multi-modal (text and image) embedding space. However, we observe that such text inputs are often bottlenecked in providing rich semantic cues, e.g., differentiating heavy rain from rain with thunderstorms. To address this issue, we advocate leveraging an additional modality, sound, which has notable advantages for image manipulation, as it can convey more diverse semantic cues (vivid emotions or dynamic expressions of the natural world) than text. In this paper, we propose a novel approach that first extends the image–text joint embedding space with sound and then applies a direct latent optimization method to manipulate a given image based on audio input, e.g., the sound of rain. Our extensive experiments show that our sound-guided image manipulation approach produces semantically and visually more plausible results than state-of-the-art text- and sound-guided image manipulation methods, which is further confirmed by our human evaluations. Our downstream task evaluations also show that our learned image–text–sound joint embedding space effectively encodes sound inputs. Examples are provided on our project page: https://kuai-lab.github.io/robust-demo/.
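To illustrate the direct latent optimization described above, the following is a minimal sketch of the general idea: starting from a source latent code, the code is updated by gradient descent so that the generated image's embedding moves toward the sound's embedding in the joint space, with a regularizer keeping the edit close to the source image. The generator, encoders, dimensions, and loss weights below are simplified placeholders for readability, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

# Illustrative stand-ins (assumptions, not the paper's released models): a StyleGAN-like
# generator, a CLIP-style image encoder, and a sound encoder that maps audio features into
# the same joint embedding space. Tiny dummy modules are used here just to keep it runnable.
generator = torch.nn.Sequential(torch.nn.Linear(512, 3 * 32 * 32), torch.nn.Tanh())
image_encoder = torch.nn.Linear(3 * 32 * 32, 512)
audio_encoder = torch.nn.Linear(128, 512)

def sound_guided_edit(w_src, audio_feat, steps=200, lr=0.01, lambda_reg=0.5):
    """Direct latent optimization: push the generated image's embedding toward the sound embedding."""
    with torch.no_grad():
        target = F.normalize(audio_encoder(audio_feat), dim=-1)   # fixed sound embedding
    w = w_src.clone().requires_grad_(True)                        # start from the source latent
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        img = generator(w)
        img_emb = F.normalize(image_encoder(img), dim=-1)
        sim_loss = 1.0 - (img_emb * target).sum(dim=-1).mean()    # cosine distance to the sound
        reg_loss = (w - w_src).pow(2).mean()                      # keep the edit close to the source
        loss = sim_loss + lambda_reg * reg_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(w.detach())

# Example: edit a random latent toward a random "rain" audio feature.
edited = sound_guided_edit(torch.randn(1, 512), torch.randn(1, 128))
```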
Original language | English |
---|---|
Article number | 106271 |
Journal | Neural Networks |
Volume | 175 |
DOIs | |
Publication status | Published - 2024 Jul |
Bibliographical note
Publisher Copyright: © 2024 Elsevier Ltd
Keywords
- Image manipulation
- Multi-modal representation learning
- Self-supervised learning
- Sound
ASJC Scopus subject areas
- Cognitive Neuroscience
- Artificial Intelligence