In autonomous driving, scene understanding is a critical task for recognizing the driving environment and dangerous situations. A variety of factors, including foreign objects on the lens, cloudy weather, and light blur, often reduce the accuracy of scene recognition. In this paper, we propose a new blind image inpainting model that accurately reconstructs images in real environments where no ground truth is available for restoration. To this end, we first introduce a panoptic map to represent content information in detail and design an encoder–decoder structure to predict both the panoptic map and the corrupted-region mask. We then construct an image inpainting model that exploits the information in the predicted map. Lastly, we present a mask refinement process to improve the accuracy of map prediction. To evaluate the effectiveness of the proposed model, we compared the restoration results of various inpainting methods on the Cityscapes and COCO datasets. Experimental results show that the proposed model outperforms other blind image inpainting models in terms of L1/L2 losses, PSNR, and SSIM, and achieves performance comparable to image inpainting techniques that rely on additional information.
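The restoration metrics reported in the abstract (L1/L2 losses and PSNR) have standard definitions that can be sketched as follows. This is a minimal illustration using NumPy, not the paper's evaluation code; the function names `psnr` and `l1_l2` are chosen here for clarity, and SSIM is omitted because it involves windowed luminance/contrast/structure statistics.

```python
import numpy as np

def l1_l2(gt, pred):
    # Mean absolute error (L1) and mean squared error (L2)
    # between a ground-truth image and a restored image.
    diff = gt.astype(np.float64) - pred.astype(np.float64)
    return np.mean(np.abs(diff)), np.mean(diff ** 2)

def psnr(gt, pred, max_val=255.0):
    # Peak signal-to-noise ratio in dB; higher means closer to ground truth.
    _, mse = l1_l2(gt, pred)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, two identical images give zero L1/L2 losses and infinite PSNR, while a constant per-pixel error of 10 on 8-bit images yields a PSNR of about 28.1 dB.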
Bibliographical note
Funding Information: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2021R1A4A1031864).
© 2022 ISA
Keywords
- Blind image inpainting
- Contextual information
- Generative Adversarial Networks
- Image restoration
- Panoptic segmentation
ASJC Scopus subject areas
- Control and Systems Engineering
- Computer Science Applications
- Electrical and Electronic Engineering
- Applied Mathematics