Voice Spoofing Detection Through Residual Network, Max Feature Map, and Depthwise Separable Convolution

Il Youp Kwak, Sungsu Kwag, Junhee Lee, Youngbae Jeon, Jeonghwan Hwang, Hyo Jung Choi, Jong Hoon Yang, So Yul Han, Jun Ho Huh, Choong Hoon Lee, Ji Won Yoon

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)


The goal of the 2019 Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof) was to make it easier to create systems that can identify voice spoofing attacks with high accuracy. However, the competition did not emphasize model complexity and latency, even though these are stringent requirements for real-world deployment. Most of the top-performing solutions used an ensemble technique that merged several sophisticated deep learning models to maximize detection accuracy; such approaches are ill-suited to deployment on voice assistants, which have limited resources. We merged skip connections (from ResNet) and the max feature map (from Light CNN) to create a compact system, and we evaluated its performance on the ASVspoof 2019 dataset. Our single model achieved a replay attack detection equal error rate (EER) of 0.30% on the evaluation set using an optimized constant Q transform (CQT) feature, outperforming the top ensemble system in the competition, which scored an EER of 0.39%. We then experimented with depthwise separable convolutions (from MobileNet) to reduce model size; this cut the parameter count by 84.3% (from 286K to 45K) while maintaining similar performance (EER of 0.36%). Additionally, we used Grad-CAM to identify which spectrogram regions contribute most to the detection of spoofed audio.
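As a rough illustration (not the authors' implementation), the two size-reducing ideas named in the abstract can be sketched in a few lines: the max feature map (MFM) from Light CNN halves the channel dimension by taking an element-wise max over two channel groups, and a depthwise separable convolution factorizes a standard k×k convolution into a depthwise step plus a 1×1 pointwise step, which is where the MobileNet-style parameter savings come from. The shapes and channel counts below are arbitrary examples, not the paper's architecture.

```python
import numpy as np

def max_feature_map(x):
    """Max Feature Map (Light CNN): split the channel axis in half and
    take the element-wise max, halving the channel dimension.
    x: array of shape (channels, height, width); channels must be even."""
    c = x.shape[0]
    return np.maximum(x[: c // 2], x[c // 2 :])

def conv_params(c_in, c_out, k, separable=False):
    """Weight count (biases ignored) of a k x k convolution layer.
    A depthwise separable convolution replaces the single
    c_in * c_out * k * k kernel with a depthwise step (c_in * k * k)
    plus a 1x1 pointwise step (c_in * c_out)."""
    if separable:
        return c_in * k * k + c_in * c_out
    return c_in * c_out * k * k

# MFM halves 64 channels to 32 while keeping the spatial size.
x = np.random.randn(64, 8, 8)
print(max_feature_map(x).shape)          # (32, 8, 8)

# Example savings for a 3x3 layer mapping 64 -> 128 channels.
std = conv_params(64, 128, 3)            # 73728
sep = conv_params(64, 128, 3, True)      # 8768
print(f"reduction: {1 - sep / std:.1%}")
```

The per-layer reduction depends on the kernel size and channel counts, so it will not match the paper's whole-model figure of 84.3%, but the arithmetic shows why swapping standard convolutions for depthwise separable ones shrinks the network so sharply.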

Original language: English
Pages (from-to): 49140-49152
Number of pages: 13
Journal: IEEE Access
Publication status: Published - 2023

Bibliographical note

Funding Information:
This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science and Information Communication Technology (ICT) under Grant RS-2023-00208284, and in part by Chung-Ang University Research Grant in 2022.

Publisher Copyright:
© 2013 IEEE.


Keywords

  • Voice assistant security
  • voice presentation attack detection
  • voice spoofing attack
  • voice synthesis attack

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering


