Abstract
We present a framework for completing high-fidelity 3D facial UV maps from a single face image. Despite the success of Generative Adversarial Networks (GANs) in this area, generating accurate UV maps from in-the-wild images remains challenging. Our approach introduces a novel network, “Map and Edit”, that combines a 2D generative model with a 3D prior to explicitly control the generation of multi-view faces. We use an indirect method to address the domain gap between rendered and real images, which improves the identity consistency of the generated multi-view facial images. We then leverage the synthesized multi-view images and the predicted 3D information to produce texture-rich, high-resolution facial UV maps. Our model is self-supervised and does not require manual annotations or labeled datasets. Experimental results demonstrate the effectiveness of our framework in reconstructing high-fidelity UV maps with accurate, fine details. Overall, our approach offers a promising solution to the challenges of 3D facial UV map completion from in-the-wild images.
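As a rough illustration of the multi-view texture-extraction step the abstract alludes to (sampling the synthesized views at the projected positions of the predicted 3D shape and blending the results into a UV map), the sketch below shows one common way such a step can be implemented. All function names, array shapes, and the visibility heuristic are our own assumptions for the example, not the paper's implementation.

```python
import numpy as np

def sample_view(image, proj_xy):
    """Nearest-neighbour sampling of per-vertex colors from one view.

    image   : (H, W, 3) float array in [0, 1]
    proj_xy : (V, 2) projected vertex positions in pixel coordinates
    returns : (V, 3) sampled colors
    """
    h, w = image.shape[:2]
    x = np.clip(np.round(proj_xy[:, 0]).astype(int), 0, w - 1)
    y = np.clip(np.round(proj_xy[:, 1]).astype(int), 0, h - 1)
    return image[y, x]

def blend_uv_texture(views, projections, normals, view_dirs, uv_coords, uv_size=256):
    """Blend per-view vertex colors into one UV map (illustrative only).

    views       : list of (H, W, 3) images (e.g., synthesized multi-view faces)
    projections : list of (V, 2) projected vertex positions, one per view
    normals     : (V, 3) unit vertex normals from the predicted 3D shape
    view_dirs   : list of (3,) unit camera viewing directions, one per view
    uv_coords   : (V, 2) per-vertex UV coordinates in [0, 1] from the 3DMM template
    """
    num_vertices = uv_coords.shape[0]
    accum = np.zeros((num_vertices, 3))
    weight = np.zeros((num_vertices, 1))
    for img, proj, d in zip(views, projections, view_dirs):
        colors = sample_view(img, proj)
        # Visibility proxy: vertices facing the camera receive higher blending weight.
        w = np.clip(normals @ (-d), 0.0, None)[:, None]
        accum += w * colors
        weight += w
    vertex_colors = accum / np.maximum(weight, 1e-8)

    # Splat blended vertex colors onto a UV grid (nearest-vertex splatting for brevity;
    # a full pipeline would rasterize the UV triangles of the 3DMM instead).
    uv_map = np.zeros((uv_size, uv_size, 3))
    u = np.clip((uv_coords[:, 0] * (uv_size - 1)).astype(int), 0, uv_size - 1)
    v = np.clip(((1.0 - uv_coords[:, 1]) * (uv_size - 1)).astype(int), 0, uv_size - 1)
    uv_map[v, u] = vertex_colors
    return uv_map
```

This only sketches the geometric unwrapping and blending; the paper's contribution lies in generating the identity-consistent multi-view inputs and completing the resulting UV map, which this example does not reproduce.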
| Original language | English |
|---|---|
| Pages (from-to) | 68-74 |
| Number of pages | 7 |
| Journal | Pattern Recognition Letters |
| Volume | 180 |
| DOIs | |
| Publication status | Published - 2024 Apr |
Bibliographical note
Publisher Copyright: © 2024 Elsevier B.V.
Keywords
- 3D face reconstruction
- 3DMM
- Multi-view face image
- UV map
ASJC Scopus subject areas
- Software
- Signal Processing
- Computer Vision and Pattern Recognition
- Artificial Intelligence