Although supervised face anti-spoofing (FAS) methods have shown remarkable performance, they generalize poorly to unseen attacks. Many existing methods have employed domain adaptation (DA) or domain generalization (DG) techniques to reduce domain variations. However, previous works have yet to fully explore the domain-specific style information within intermediate layers, which carries knowledge about face attack styles (e.g., illumination, backgrounds, and materials). In this paper, we present a new framework, Meta Style Selective Normalization (MetaSSN), for test-time domain adaptive FAS. Specifically, we propose style selective normalization (SSN), which statistically estimates the domain-specific image style of individual domains. SSN adapts the network to the target image by selecting the optimal normalization parameters, reducing the style discrepancy between the source and target domains. Furthermore, we design the training strategy as a meta-learning pipeline that simulates test-time adaptation by performing the style selection process on a virtual test domain, which boosts adaptation capability. In contrast to previous DA approaches, our framework is more practical since it requires no additional auxiliary networks (e.g., domain adaptors) during training. To validate our method, we use the public FAS datasets CASIA-FASD, MSU-MFSD, Oulu-NPU, and Idiap Replay-Attack. In most evaluations, our results show a significant performance improvement over conventional FAS methods.
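The core idea of SSN, as described above, is to characterize an input's style from feature statistics and pick the normalization parameters of the closest source domain. The following is a minimal sketch of that selection step, not the authors' implementation: it assumes style is summarized by per-channel instance-normalization statistics (mean and standard deviation) and that each source domain stores a style prototype together with its own affine parameters. All names (`style_stats`, `domain_params`, `gamma`, `beta`) are hypothetical.

```python
import numpy as np

def style_stats(x):
    # Per-channel mean and std of a feature map x of shape (C, H, W),
    # i.e., instance-normalization statistics, used here as the "style".
    mu = x.mean(axis=(1, 2))
    sigma = x.std(axis=(1, 2)) + 1e-6  # epsilon avoids division by zero
    return mu, sigma

def select_and_normalize(x, domain_params):
    # domain_params: dict mapping domain name -> dict with keys
    # 'mu', 'sigma' (style prototype) and 'gamma', 'beta' (affine
    # parameters), each of shape (C,). This structure is an assumption
    # for illustration.
    mu, sigma = style_stats(x)
    # Select the source domain whose style prototype is nearest in L2 distance.
    best = min(
        domain_params,
        key=lambda d: np.linalg.norm(mu - domain_params[d]["mu"])
                    + np.linalg.norm(sigma - domain_params[d]["sigma"]),
    )
    p = domain_params[best]
    # Instance-normalize the input, then apply the selected domain's affine.
    x_norm = (x - mu[:, None, None]) / sigma[:, None, None]
    return best, p["gamma"][:, None, None] * x_norm + p["beta"][:, None, None]
```

In the meta-learning pipeline, a held-out source domain would play the role of the virtual test domain, so the selection step above is exercised during training exactly as it will be at test time.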
Bibliographical note
Funding Information:
This work was supported by an Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00079, Artificial Intelligence Graduate School Program, Korea University).
© 2022 Elsevier Ltd
Keywords
- Domain adaptation
- Face anti-spoofing
- Model-agnostic meta-learning
ASJC Scopus subject areas
- Computer Science Applications
- Artificial Intelligence