With the recent growth in demand for both natural-language and visual information, research on seamless multi-modal processing for the effective retrieval of such information has gained importance. However, because of the unstructured nature of images, it is difficult to retrieve images that accurately represent an input text. In this study, we used an augmented version of a multi-generator generative adversarial network that takes BERT embeddings and attention maps as input, enabling a grounded vocabulary for visual representations. We compared the performance of the proposed model with that of other state-of-the-art text-based image retrieval methods on the MSCOCO and Flickr30K datasets, and the results demonstrated the potential of the proposed method. Even with a limited vocabulary, the proposed model was comparable to state-of-the-art methods on R@10 and even exceeded them on R@1. Moreover, we revealed the unique properties of our method by demonstrating how it performs successfully even when given more descriptive text or short sentences as input.
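To make the text-side input described above concrete, the following is a minimal sketch of how BERT token embeddings and attention maps can be extracted and combined with a noise vector as conditioning for a GAN generator. This is an illustration only, not the authors' implementation: the checkpoint name (`bert-base-uncased`), the averaging over layers and heads, the mean pooling, and the noise dimensionality are all assumptions.

```python
# Sketch: extract BERT embeddings and attention maps as GAN conditioning input.
# Assumes the HuggingFace transformers library; details differ from the paper.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

def encode_caption(caption: str):
    """Return token embeddings and an averaged attention map for one caption."""
    inputs = tokenizer(caption, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = bert(**inputs)
    token_embeddings = out.last_hidden_state  # shape: (1, seq_len, 768)
    # out.attentions is a tuple with one (1, num_heads, seq_len, seq_len)
    # tensor per layer. Averaging over layers and heads is an assumption;
    # the paper may instead select specific layers or heads.
    attn_map = torch.stack(out.attentions).mean(dim=(0, 2))  # (1, seq_len, seq_len)
    return token_embeddings, attn_map

emb, attn = encode_caption("a brown dog playing fetch in the park")
# One plausible way to condition each generator in a multi-generator GAN:
# concatenate a pooled sentence embedding with the noise vector z.
z = torch.randn(1, 100)                                  # hypothetical noise dim
conditioning = torch.cat([emb.mean(dim=1), z], dim=-1)   # (1, 868)
```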
Bibliographical note
Funding Information:
This work was supported in part by the Ministry of Science and ICT, South Korea, through the Information Technology Research Center Support Program supervised by the Institute for Information and Communications Technology Planning and Evaluation under Grant IITP-2018-0-01405; in part by the Korean Government (A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques) under Grant 2020-0-00368; and in part by the Ministry of Science and ICT, South Korea, through the ICT Creative Consilience Program supervised by the Institute for Information and Communications Technology Planning and Evaluation under Grant IITP-2021-2020-0-01819.
© 2021 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
Keywords
- Artificial intelligence
- Artificial neural network
- Computer vision
- Image processing
- Search methods
ASJC Scopus subject areas
- Materials Science (all)
- Computer Science (all)