Visual context-aware attribute-preserving face de-identification

Hyeonwoo Kim, Jonghwa Shim, Sungwoo Park, Eenjun Hwang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Recent advancements in face de-identification using deep learning have achieved significant progress in generating realistic de-identified face images. However, most existing de-identification methods anonymize identities by inpainting the entire face or specific regions, often altering essential facial attributes such as emotion, gender, age, or ethnicity. This alteration can compromise the original purpose of the collected data. While attribute-preserving de-identification methods have been proposed to address this issue, they rely on labeled data for the attributes to be preserved and require model retraining whenever the target attributes are modified. To overcome these limitations, we propose a visual context-aware attribute-preserving face de-identification method. This method leverages the semantic relationship between images and textual descriptions to identify and preserve visually significant features that encapsulate the overall context of the facial image, while effectively anonymizing identity-related information. We extract comprehensive facial attribute features through an attribute encoder trained on paired images and textual descriptions. Then, we map the extracted features onto a semantic space using a semantic map, aligning them with the structural information of the face so that facial attributes are preserved during de-identification. This allows for robust face de-identification while maintaining critical attributes. To assess the preservation of facial attributes such as gender, age, and race, we conduct a comparative evaluation using classification networks, confirming the effectiveness of our method in maintaining these attributes. Additionally, our method achieves 4–30% improvements in image quality and 4–9% improvements in attribute similarity on CelebAMask, and 6–8% and 1–6% improvements, respectively, on FFHQ.
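To make the core idea concrete, the sketch below illustrates one plausible form of the image–text attribute matching the abstract describes: scoring a face embedding against embeddings of textual attribute descriptions and keeping the attributes that score highly as the "visually significant features" to preserve. This is a toy illustration only, not the paper's model; the random vectors stand in for a learned attribute encoder, and the attribute names, threshold, and dimensions are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    """Normalize vectors to unit length so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Hypothetical stand-ins for learned embeddings: in the actual method these
# would come from an attribute encoder trained on paired images and captions.
attribute_names = ["young", "old", "male", "female", "smiling"]
text_embeddings = l2_normalize(rng.normal(size=(5, 64)))

# A toy "face image" embedding constructed to lie close to the "young" and
# "smiling" attribute directions, with a small amount of noise.
image_embedding = l2_normalize(
    text_embeddings[0] + text_embeddings[4] + 0.1 * rng.normal(size=64)
)

# Cosine similarity between the image and each attribute description.
similarities = text_embeddings @ image_embedding

# Keep the attributes whose similarity exceeds a threshold: these are the
# visually significant features the de-identification step should preserve.
threshold = 0.5
preserved = [n for n, s in zip(attribute_names, similarities) if s > threshold]
print(preserved)
```

In the paper's pipeline, the preserved attribute features would then be mapped onto a semantic space aligned with the face's structural information; here the selection step alone shows how textual descriptions can drive attribute preservation without per-attribute labels or retraining.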

Original language: English
Article number: 130205
Journal: Neurocomputing
Volume: 638
Publication status: Published - 2025 Jul 14

Bibliographical note

Publisher Copyright:
© 2025 Elsevier B.V.

Keywords

  • Face de-identification
  • Generative model
  • Privacy protection

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence
