Multi-View Attention Transfer for Efficient Speech Enhancement

Wooseok Shin, Hyun Joon Park, Jin Sob Kim, Byung Hoon Lee, Sung Won Han

    Research output: Contribution to journal › Conference article › peer-review

    Abstract

    Recent deep learning models have achieved high performance in speech enhancement; however, obtaining a fast, low-complexity model without significant performance degradation remains challenging. Previous knowledge distillation studies on speech enhancement could not solve this problem because their output distillation methods do not fit the speech enhancement task in some respects. In this study, we propose multi-view attention transfer (MV-AT), a feature-based distillation method, to obtain efficient speech enhancement models in the time domain. Based on a multi-view feature extraction model, MV-AT transfers the multi-view knowledge of the teacher network to the student network without additional parameters. Experimental results show that the proposed method consistently improves the performance of student models of various sizes on the Valentini and deep noise suppression (DNS) datasets. MANNER-S-8.1GF, a lightweight model for efficient deployment trained with the proposed method, requires 15.4× fewer parameters and 4.71× fewer floating-point operations (FLOPs) than a baseline model of similar performance.
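
    The paper itself defines MV-AT's exact views and losses; as a rough illustration of the general feature-based attention-transfer idea it builds on, a minimal PyTorch sketch might look like the following. The tensor shapes (batch, channels, time), the squared-channel-sum attention maps, and all function names here are assumptions for illustration, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def attention_map(features: torch.Tensor) -> torch.Tensor:
    # Collapse (batch, channels, time) features into a (batch, time)
    # attention map by summing squared channel activations, then
    # L2-normalizing along time. (Assumed form, not the paper's.)
    attn = features.pow(2).sum(dim=1)
    return F.normalize(attn, p=2, dim=1)

def attention_transfer_loss(student_feats, teacher_feats):
    # Average MSE between paired student/teacher attention maps,
    # one pair per feature "view" (layer). Teacher features are
    # detached so gradients flow only into the student.
    losses = [
        F.mse_loss(attention_map(s), attention_map(t.detach()))
        for s, t in zip(student_feats, teacher_feats)
    ]
    return torch.stack(losses).mean()

# Hypothetical usage: two feature "views" with differing channel
# counts; only the time dimension must match within each pair.
s_feats = [torch.randn(4, 32, 16000), torch.randn(4, 64, 8000)]
t_feats = [torch.randn(4, 128, 16000), torch.randn(4, 256, 8000)]
loss = attention_transfer_loss(s_feats, t_feats)
```

    Because the loss compares attention maps rather than raw features, the student needs no extra projection layers to match the teacher's channel widths, which is consistent with the abstract's "without additional parameters" claim.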

    Original language: English
    Pages (from-to): 1198-1202
    Number of pages: 5
    Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
    Volume: 2022-September
    Publication status: Published - 2022
    Event: 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022 - Incheon, Korea, Republic of
    Duration: 2022 Sept 18 - 2022 Sept 22

    Bibliographical note

    Funding Information:
    This research was supported by Brain Korea 21 FOUR. This research was also supported by Korea University Grant (K2202151).

    Publisher Copyright:
    Copyright © 2022 ISCA.

    Keywords

    • feature distillation
    • low complexity
    • multi-view knowledge distillation
    • speech enhancement
    • time domain

    ASJC Scopus subject areas

    • Language and Linguistics
    • Human-Computer Interaction
    • Signal Processing
    • Software
    • Modelling and Simulation
