URHand: Universal Relightable Hands

  • Zhaoxi Chen
  • Gyeongsik Moon
  • Kaiwen Guo
  • Chen Cao
  • Stanislav Pidhorskyi
  • Tomas Simon
  • Rohan Joshi
  • Yuan Dong
  • Yichen Xu
  • Bernardo Pires
  • He Wen
  • Lucas Evans
  • Bo Peng
  • Julia Buffalini
  • Autumn Trimble
  • Kevyn Mcphail
  • Melissa Schoeller
  • Shoou-I Yu
  • Javier Romero
  • Michael Zollhöfer
  • Yaser Sheikh
  • Ziwei Liu*
  • Shunsuke Saito*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Existing photorealistic relightable hand models require extensive identity-specific observations in different views, poses, and illuminations, and face challenges in generalizing to natural illuminations and novel identities. To bridge this gap, we present URHand, the first universal relightable hand model that generalizes across viewpoints, poses, illuminations, and identities. Our model allows few-shot personalization using images captured with a mobile phone, and is ready to be photorealistically rendered under novel illuminations. To simplify the personalization process while retaining photorealism, we build a powerful universal relightable prior based on neural relighting from multi-view images of hands captured in a light stage with hundreds of identities. The key challenge is scaling the cross-identity training while maintaining personalized fidelity and sharp details without compromising generalization under natural illuminations. To this end, we propose a spatially varying linear lighting model as the neural renderer that takes physics-inspired shading as an input feature. By removing non-linear activations and bias, our specifically designed lighting model explicitly keeps the linearity of light transport. This enables single-stage training from light-stage data while generalizing to real-time rendering under arbitrary continuous illuminations across diverse identities. In addition, we introduce the joint learning of a physically based model and our neural relighting model, which further improves fidelity and generalization. Extensive experiments show that our approach achieves superior performance over existing methods in terms of both quality and generalizability. We also demonstrate quick personalization of URHand from a short phone scan of an unseen identity.
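The linearity argument in the abstract can be made concrete with a small sketch. The module below is a hypothetical illustration, not the authors' code: a bias-free stack of linear layers applied to physics-inspired shading features, so the output stays linear in the incident lighting and an environment map can be rendered as a weighted sum of per-light solutions. All names, dimensions, and the use of shared (rather than spatially varying, conditioned) weights are simplifying assumptions for illustration.

```python
# Minimal sketch of a linear-in-light neural lighting model, in the
# spirit of the abstract (hypothetical; not the authors' implementation).
# Key idea: no nonlinear activations and no bias terms, so the output
# is linear in the shading input, and hence in the lighting.
import torch
import torch.nn as nn

class LinearLightingModel(nn.Module):
    """Maps physics-inspired shading features to relit RGB appearance.

    Because every layer is bias-free and purely linear, scaling or
    summing lights scales or sums the output the same way:
    f(a*s1 + b*s2) = a*f(s1) + b*f(s2).
    """
    def __init__(self, shading_dim: int = 8, feat_dim: int = 32):
        super().__init__()
        # In the paper the weights are spatially varying; a shared
        # bias-free linear stack keeps this sketch short.
        self.net = nn.Sequential(
            nn.Linear(shading_dim, feat_dim, bias=False),
            nn.Linear(feat_dim, feat_dim, bias=False),
            nn.Linear(feat_dim, 3, bias=False),  # RGB radiance
        )

    def forward(self, shading: torch.Tensor) -> torch.Tensor:
        # shading: (..., shading_dim) physics-inspired features
        # (e.g., diffuse/specular terms) computed per light.
        return self.net(shading)

# Linearity check: rendering under a weighted sum of two lights equals
# the weighted sum of the individual renderings, which is what lets a
# model trained on point lights generalize to continuous illumination.
model = LinearLightingModel()
s1, s2 = torch.randn(4, 8), torch.randn(4, 8)
lhs = model(s1 + 2.0 * s2)
rhs = model(s1) + 2.0 * model(s2)
assert torch.allclose(lhs, rhs, atol=1e-5)
```

With this property, relighting under an environment map reduces to summing the model's responses to the discrete point lights of the light stage, weighted by the environment's intensity at each light, which is why point-light training data suffices for continuous illuminations.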

Original language: English
Title of host publication: Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
Publisher: IEEE Computer Society
Pages: 119-129
Number of pages: 11
ISBN (Electronic): 9798350353006
ISBN (Print): 9798350353006
DOIs
Publication status: Published - 2024
Externally published: Yes
Event: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024 - Seattle, United States
Duration: 2024 Jun 16 - 2024 Jun 22

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919

Conference

Conference: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
Country/Territory: United States
City: Seattle
Period: 24/6/16 - 24/6/22

Bibliographical note

Publisher Copyright:
© 2024 IEEE.

Keywords

  • 3D Hand Modeling
  • Generalizable Modeling
  • Photorealistic Rendering
  • Relighting

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
