RORD: A Real-world Object Removal Dataset

Min Cheol Sagong, Yoon Jae Yeo, Seung Won Jung, Sung Jea Ko

Research output: Contribution to conference › Paper › peer-review


In recent years, various convolutional neural network (CNN)-based image inpainting techniques have been actively studied to remove unwanted objects or restore missing parts of images. The common practice for training image inpainting CNNs is to synthesise hole regions on existing datasets, such as ImageNet and Places2. However, from the viewpoint of the object removal task, this methodology is suboptimal because the actual pixels behind objects, i.e., the “ground truth”, cannot be used for training. To address this problem, we introduce the Real-world Object Removal Dataset (RORD), a large-scale collection of image pairs with and without objects. RORD covers a wide range of real-world scenes and provides two types of pixel-accurate annotations: object masks and segmentation maps. Our dataset allows existing image inpainting models to be trained accurately as well as evaluated with high confidence. In this paper, we describe in detail how the dataset is constructed and demonstrate the validity and usability of RORD.
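Because RORD supplies the object-free frame as ground truth, an inpainting result can be scored directly against real pixels inside the masked region rather than against synthetic hole fills. The sketch below illustrates this idea with a mask-restricted L1 error on toy arrays; the array shapes, the function name, and the synthetic data are illustrative assumptions, not part of the RORD release.

```python
import numpy as np

def masked_l1_loss(pred, gt, mask):
    """Mean absolute error restricted to the masked (object) pixels.

    pred, gt: H x W x 3 float arrays; mask: H x W binary array where
    1 marks the removed-object region. All names are hypothetical.
    """
    hole = mask.astype(bool)
    return float(np.abs(pred[hole] - gt[hole]).mean())

# Toy stand-ins for a RORD-style pair: the same scene captured with
# and without an object, plus a pixel-accurate object mask.
rng = np.random.default_rng(0)
gt = rng.random((4, 4, 3))          # object-free "ground truth" frame
pred = gt.copy()                    # pretend inpainting output
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1                  # assumed object region
pred[1:3, 1:3] += 0.5               # constant error inside the hole

loss = masked_l1_loss(pred, gt, mask)  # → 0.5 by construction
```

Restricting the metric to the hole region keeps the score from being diluted by the untouched background, which is why paired real-world data enables higher-confidence evaluation than synthetic hole synthesis.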

Original language: English
Publication status: Published - 2022
Event: 33rd British Machine Vision Conference Proceedings, BMVC 2022 - London, United Kingdom
Duration: 21 Nov 2022 - 24 Nov 2022


Conference: 33rd British Machine Vision Conference Proceedings, BMVC 2022
Country/Territory: United Kingdom

Bibliographical note

Publisher Copyright:
© 2022. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition


