Abstract
This paper presents a Knowledge Distillation (KD) learning scheme for dynamic convolution in the front end. In our previous work, we confirmed that applying dynamic filtering to the front end of a classifier improves classification performance in noisy environments. The main goal of this study is to develop a Relational Knowledge Distillation (RKD) framework for dynamic convolution. The teacher model consists of six layers of dynamic convolution, while the student model is constructed with a single layer of dynamic convolution and is trained with the RKD loss and the classifier loss. For performance evaluation, experiments are carried out on a Keyword Spotting (KWS) classification task. The experimental results show that the proposed KD method improves KWS performance in noisy environments over the baseline student model, which is trained with the classifier loss alone.
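For concreteness, below is a minimal PyTorch sketch of the student objective the abstract describes: a cross-entropy classifier loss plus a distance-wise relational KD loss that matches the pairwise-distance structure of the teacher's embeddings. The `TinyKWSNet` module, the plain `Conv1d` layers standing in for dynamic convolution, the feature dimensions, and the weight `lambda_rkd` are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyKWSNet(nn.Module):
    """Toy front end + classifier; plain Conv1d stands in for dynamic convolution."""
    def __init__(self, in_ch=40, width=64, n_classes=12, n_conv=1):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(n_conv):
            layers += [nn.Conv1d(ch, width, kernel_size=3, padding=1), nn.ReLU()]
            ch = width
        self.conv = nn.Sequential(*layers)
        self.fc = nn.Linear(width, n_classes)

    def forward(self, x):                  # x: (batch, in_ch, time)
        emb = self.conv(x).mean(dim=2)     # global average pooling -> embedding
        return emb, self.fc(emb)

def pairwise_distances(e):
    """Pairwise L2 distances, normalized by their mean so scales are comparable."""
    d = torch.cdist(e, e, p=2)
    return d / d[d > 0].mean()

def rkd_distance_loss(s_emb, t_emb):
    """Distance-wise RKD: match the pairwise-distance structure of the teacher."""
    with torch.no_grad():
        t = pairwise_distances(t_emb)
    return F.smooth_l1_loss(pairwise_distances(s_emb), t)

# Teacher: six conv layers; student: one conv layer, mirroring the paper's setup.
teacher = TinyKWSNet(n_conv=6).eval()
student = TinyKWSNet(n_conv=1)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(8, 40, 101)                # dummy batch of acoustic feature frames
labels = torch.randint(0, 12, (8,))
lambda_rkd = 1.0                           # assumed loss weight

with torch.no_grad():
    t_emb, _ = teacher(x)                  # frozen teacher embeddings
s_emb, logits = student(x)
loss = F.cross_entropy(logits, labels) + lambda_rkd * rkd_distance_loss(s_emb, t_emb)
opt.zero_grad(); loss.backward(); opt.step()
```

The student here learns both to classify and to reproduce the relational geometry of the teacher's embedding space, which is what lets a single dynamic-convolution layer approach the six-layer teacher's robustness.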
Original language | English |
---|---|
Journal | Proceedings of the International Congress on Acoustics |
Publication status | Published - 2022 |
Event | 24th International Congress on Acoustics, ICA 2022 - Gyeongju, Korea, Republic of. Duration: 24 Oct 2022 → 28 Oct 2022 |
Bibliographical note
Publisher Copyright: © 2022 Proceedings of the International Congress on Acoustics. All rights reserved.
Keywords
- Dynamic convolution
- Keyword spotting
- Knowledge distillation
- Lightweight
- Sound classification
ASJC Scopus subject areas
- Mechanical Engineering
- Acoustics and Ultrasonics