TY - GEN
T1 - Sound-Guided Semantic Image Manipulation
AU - Lee, Seung Hyun
AU - Roh, Wonseok
AU - Byeon, Wonmin
AU - Yoon, Sang Ho
AU - Kim, Chanyoung
AU - Kim, Jinkyu
AU - Kim, Sangpil
N1 - Funding Information:
Acknowledgement. This work was supported by the National Research Foundation of Korea grant (NRF-2021R1G1A1093855) and partially supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00079, Artificial Intelligence Graduate School Program (Korea University)). J. Kim is partially supported by the National Research Foundation of Korea grant (NRF-2021R1C1C1009608), Basic Science Research Program (NRF-2021R1A6A1A13044830), and ICT Creative Consilience program (IITP-2022-2022-0-01819). S. Yoon is supported by KAIST grant (G04210059). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agency.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - The recent success of generative models shows that leveraging the multi-modal embedding space makes it possible to manipulate an image using text information. However, manipulating an image with sources other than text, such as sound, is not easy due to the dynamic characteristics of those sources. In particular, sound can convey vivid emotions and dynamic expressions of the real world. Here, we propose a framework that directly encodes sound into the multi-modal (image-text) embedding space and manipulates an image from that space. Our audio encoder is trained to produce a latent representation from an audio input, which is forced to be aligned with the image and text representations in the multi-modal embedding space. We use a direct latent optimization method based on the aligned embeddings for sound-guided image manipulation. We also show that our method can mix different modalities, i.e., text and audio, which enriches the variety of image modifications. Experiments on zero-shot audio classification and semantic-level image classification show that our proposed model outperforms other text- and sound-guided state-of-the-art methods.
AB - The recent success of generative models shows that leveraging the multi-modal embedding space makes it possible to manipulate an image using text information. However, manipulating an image with sources other than text, such as sound, is not easy due to the dynamic characteristics of those sources. In particular, sound can convey vivid emotions and dynamic expressions of the real world. Here, we propose a framework that directly encodes sound into the multi-modal (image-text) embedding space and manipulates an image from that space. Our audio encoder is trained to produce a latent representation from an audio input, which is forced to be aligned with the image and text representations in the multi-modal embedding space. We use a direct latent optimization method based on the aligned embeddings for sound-guided image manipulation. We also show that our method can mix different modalities, i.e., text and audio, which enriches the variety of image modifications. Experiments on zero-shot audio classification and semantic-level image classification show that our proposed model outperforms other text- and sound-guided state-of-the-art methods.
KW - Image and video synthesis and generation
KW - Self- & semi- & meta- & unsupervised learning
UR - http://www.scopus.com/inward/record.url?scp=85128328141&partnerID=8YFLogxK
U2 - 10.1109/CVPR52688.2022.00337
DO - 10.1109/CVPR52688.2022.00337
M3 - Conference contribution
AN - SCOPUS:85128328141
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 3367
EP - 3376
BT - Proceedings - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022
PB - IEEE Computer Society
T2 - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022
Y2 - 19 June 2022 through 24 June 2022
ER -