Reference Guided Image Inpainting using Facial Attributes

Dongsik Yoon, Jeonggi Kwak, Yuanming Li, David Han, Youngsaeng Jin, Hanseok Ko

Research output: Contribution to conference › Paper › peer-review


Image inpainting is a technique for completing missing pixels, with applications such as occluded-region restoration, distracting-object removal, and facial completion. Among these inpainting tasks, facial completion performs face inpainting according to user direction. Existing approaches require delicate, well-controlled user input, so it is difficult for an average user to provide guidance accurate enough for the algorithm to generate the desired results. To overcome this limitation, we propose an alternative user-guided inpainting architecture that manipulates facial attributes using a single reference image as the guide. Our end-to-end model consists of attribute extractors for accurate transfer of the reference image's attributes and an inpainting model that maps those attributes realistically and accurately onto the generated images. We customize an MS-SSIM loss and learnable bidirectional attention maps in which important structures remain intact even under irregularly shaped masks. Based on our evaluation on the publicly available CelebA-HQ dataset, we demonstrate that the proposed method delivers superior performance compared to some state-of-the-art methods specialized in inpainting tasks.
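The abstract does not detail the paper's customized MS-SSIM loss, but as a rough illustration of the standard multi-scale SSIM it builds on, the sketch below computes a simplified MS-SSIM between two grayscale images and turns it into a loss. It uses global image statistics rather than the usual Gaussian-windowed ones, and plain average-pool downsampling; both are simplifications of mine, not the authors' formulation.

```python
import numpy as np

def ssim_terms(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Luminance and contrast-structure terms of SSIM, computed from
    # global statistics (a simplification of windowed SSIM).
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    lum = (2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)
    cs = (2 * cov + c2) / (x.var() + y.var() + c2)
    return lum, cs

def ms_ssim(x, y, weights=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333)):
    # Multi-scale SSIM: contrast-structure at every scale, luminance
    # only at the coarsest scale, combined as a weighted product.
    result = 1.0
    for i, weight in enumerate(weights):
        lum, cs = ssim_terms(x, y)
        if i == len(weights) - 1:
            result *= (lum * cs) ** weight
        else:
            result *= cs ** weight
        # 2x downsample by average pooling before the next scale.
        h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
        x = x[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        y = y[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return result

def ms_ssim_loss(x, y):
    # Perfectly matching images give MS-SSIM = 1, hence loss = 0.
    return 1.0 - ms_ssim(x, y)
```

With five scales the input must be at least 32x32 so each halving stays non-degenerate; identical inputs yield a loss of zero.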

Original language: English
Publication status: Published - 2021
Event: 32nd British Machine Vision Conference, BMVC 2021 - Virtual, Online
Duration: 2021 Nov 22 - 2021 Nov 25


Conference: 32nd British Machine Vision Conference, BMVC 2021
City: Virtual, Online

Bibliographical note

Publisher Copyright:
© 2021. The copyright of this document resides with its authors.

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition


