Prototypical Knowledge Distillation for Noise Robust Keyword Spotting

Donghyeon Kim, Gwantae Kim, Bokyeung Lee, Hanseok Ko

Research output: Contribution to journal › Article › peer-review


Keyword Spotting (KWS) is an essential component of contemporary audio-based deep learning systems and should be of minimal design when the system operates in streaming and on-device environments. In our previous work, we presented robust feature extraction with a single-layer dynamic convolution model. In this letter, we extend that study to multi-layer operation and propose a robust Knowledge Distillation (KD) learning method. Based on the distribution between class centroids and embedding vectors, we compute three distinct distance metrics for the KD training and feature extraction processes. The results indicate that our KD method matches state-of-the-art models in KWS performance while incurring lower computational cost. Furthermore, our proposed method is more robust in noisy environments than conventional KD methods.
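The abstract's prototypical setup can be sketched as follows. This is a minimal, hypothetical illustration (not the letter's implementation): class centroids (prototypes) are computed as per-class means of embedding vectors, and a distance from an embedding to each prototype is then measured. The letter defines three distinct distance metrics; Euclidean distance here is only a stand-in.

```python
import numpy as np

def class_centroids(embeddings, labels, num_classes):
    """Per-class mean of embedding vectors: the prototype of each keyword class."""
    dim = embeddings.shape[1]
    protos = np.zeros((num_classes, dim))
    for c in range(num_classes):
        protos[c] = embeddings[labels == c].mean(axis=0)
    return protos

def prototype_distances(embedding, protos):
    """Distance from one embedding to every class prototype.
    Euclidean distance is illustrative; the letter uses three metrics of its own."""
    return np.linalg.norm(protos - embedding, axis=1)

# Toy example: 6 embeddings of dimension 4, two keyword classes.
rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 4))
lab = np.array([0, 0, 0, 1, 1, 1])
protos = class_centroids(emb, lab, num_classes=2)
dists = prototype_distances(emb[0], protos)  # one distance per class
```

In a distillation setting, such prototype distances from teacher and student embeddings could be compared as a KD target, which is the general idea the abstract describes.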

Original language: English
Pages (from-to): 2298-2302
Number of pages: 5
Journal: IEEE Signal Processing Letters
Publication status: Published - 2022


Keywords

  • Keyword spotting
  • knowledge distillation
  • prototypical learning

ASJC Scopus subject areas

  • Signal Processing
  • Applied Mathematics
  • Electrical and Electronic Engineering


