Abstract
Various convolutional neural network (CNN)-based image inpainting techniques have been actively studied in recent years to remove unwanted objects or restore missing parts of images. The common practice for training image inpainting CNNs is to synthesise hole regions on existing datasets, such as ImageNet and Places2. However, from the viewpoint of the object removal task, this methodology is suboptimal because the actual pixels behind objects, i.e., the “ground truth”, cannot be used for training. To address this problem, we introduce the Real-world Object Removal Dataset (RORD), a large-scale collection of image pairs with and without objects. RORD covers a wide range of real-world scenes and provides two types of pixel-accurate annotations, i.e., object masks and segmentation maps. Our dataset allows existing image inpainting models to be trained accurately as well as evaluated with high confidence. In this paper, we describe in detail how the dataset is constructed and demonstrate the validity and usability of RORD.
Original language | English
---|---
Publication status | Published - 2022
Event | 33rd British Machine Vision Conference Proceedings, BMVC 2022, London, United Kingdom; 21 Nov 2022 → 24 Nov 2022
Conference
Conference | 33rd British Machine Vision Conference Proceedings, BMVC 2022
---|---
Country/Territory | United Kingdom
City | London
Period | 21 Nov 2022 → 24 Nov 2022
Bibliographical note
Publisher Copyright: © 2022. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition