Abstract
Reasoning, a trait of cognitive intelligence, is regarded as a crucial ability that distinguishes humans from other species. However, neural networks are now beginning to challenge this distinctly human ability. Text-to-image synthesis is a task at the intersection of vision and language, wherein the goal is to learn multimodal representations that bridge image and text features. Hence, it requires a high-level reasoning ability that understands the relationships between objects in the given text and generates high-quality images based on that understanding. In this sense, text-to-image translation can be regarded as the visual thinking of neural networks. In this study, our model infers the complicated relationships between objects in the given text and generates the final image by leveraging the previous generation history. We define several novel adversarial loss functions and demonstrate which one best elevates the reasoning ability of text-to-image synthesis. Remarkably, most of our models possess their own reasoning ability. Quantitative and qualitative comparisons with several methods demonstrate the superiority of our approach.
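The abstract mentions novel adversarial loss functions without detailing them. For orientation only, the sketch below shows a standard text-conditioned (matching-aware) hinge adversarial loss of the kind commonly used in text-to-image GANs; it is not the paper's specific formulation, and all names (`d_real`, `d_fake`, `d_mismatch`) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def d_hinge_loss(d_real: torch.Tensor,
                 d_fake: torch.Tensor,
                 d_mismatch: torch.Tensor) -> torch.Tensor:
    """Hinge loss for a text-conditioned discriminator.

    d_real:     D(real image, matching text)
    d_fake:     D(generated image, matching text)
    d_mismatch: D(real image, mismatched text); penalizing this term
                pushes D to check text-image consistency, not just realism.
    """
    loss_real = F.relu(1.0 - d_real).mean()
    loss_fake = F.relu(1.0 + d_fake).mean()
    loss_mis = F.relu(1.0 + d_mismatch).mean()
    return loss_real + 0.5 * (loss_fake + loss_mis)


def g_hinge_loss(d_fake: torch.Tensor) -> torch.Tensor:
    """Generator side: raise D's score on generated (image, text) pairs."""
    return -d_fake.mean()
```

The mismatched-pair term follows the matching-aware discriminator idea common in conditional text-to-image GANs; the paper's own losses may differ.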
| Original language | English |
|---|---|
| Article number | 9410550 |
| Pages (from-to) | 64510-64523 |
| Number of pages | 14 |
| Journal | IEEE Access |
| Volume | 9 |
| DOIs | |
| Publication status | Published - 2021 |
Bibliographical note
Funding Information: This work was supported in part by the Ministry of Science and ICT (MSIT), South Korea, through the Information Technology Research Center (ITRC) Support Program supervised by the Institute for Information and Communications Technology Planning and Evaluation (IITP) under Grant IITP-2018-0-01405, and in part by the IITP through the MSIT under Grant IITP-2020-0-00368.
Publisher Copyright:
© 2013 IEEE.
Keywords
- Generative adversarial networks
- image generation
- multimodal learning
- multimodal representation
- text-to-image synthesis
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering