Double-attention mechanism of sequence-to-sequence deep neural networks for automatic speech recognition

Dongsuk Yook, Dan Lim, In Chul Yoo

Research output: Contribution to journal › Article › peer-review


Sequence-to-sequence deep neural networks with attention mechanisms have shown superior performance across various domains where the sizes of the input and output sequences differ. However, if the input sequences are much longer than the output sequences, and the characteristics of the input sequence change within a single output token, conventional attention mechanisms are inadequate, because only a single context vector is used for each output token. In this paper, we propose a double-attention mechanism that handles this problem by using two context vectors covering the left and right parts of the input focus separately. The effectiveness of the proposed method is evaluated through speech recognition experiments on the TIMIT corpus.
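The abstract does not give the exact formulation, but the core idea of two context vectors flanking the attention focus can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the focus index, the left/right segmentation, and the renormalized softmax weights are all assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def double_attention(encoder_states, scores, focus):
    """Sketch of a double-attention step (assumed formulation).

    encoder_states: (T, d) array of encoder outputs
    scores: (T,) unnormalized attention energies for the current output token
    focus: index of the attention focus (e.g., the argmax of the scores)

    Returns two context vectors: one summarizing frames up to the focus,
    one summarizing frames from the focus onward. A conventional attention
    mechanism would instead return a single context vector over all T frames.
    """
    left = slice(0, focus + 1)            # frames up to and including the focus
    right = slice(focus, len(scores))     # frames from the focus onward
    w_left = softmax(scores[left])        # renormalized weights on the left part
    w_right = softmax(scores[right])      # renormalized weights on the right part
    c_left = w_left @ encoder_states[left]
    c_right = w_right @ encoder_states[right]
    return c_left, c_right                # typically concatenated for the decoder

# toy usage: 10 encoder frames of dimension 4
rng = np.random.default_rng(0)
H = rng.standard_normal((10, 4))
s = rng.standard_normal(10)
c_left, c_right = double_attention(H, s, int(np.argmax(s)))
```

Because each context vector is a convex combination of encoder states on its side of the focus, the decoder can see that the acoustic characteristics before and after the focus differ, which a single pooled context vector would blur together.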

Original language: English
Pages (from-to): 476-482
Number of pages: 7
Journal: Journal of the Acoustical Society of Korea
Issue number: 5
Publication status: Published - 2020

Bibliographical note

Funding Information:
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (NRF-2017R1E1A1A01078157). It was also partly supported by the MSIT (Ministry of Science and ICT) under the ITRC (Information Technology Research Center) support program (IITP-2018-0-01405) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), and by an IITP grant funded by the Korean government (MSIT) (No. 2018-0-00269, A research on safe and convenient big data processing methods).

Publisher Copyright:
Copyright © 2020 The Acoustical Society of Korea.


Keywords
  • Attention
  • Automatic speech recognition
  • Deep neural network
  • Sequence-to-sequence

ASJC Scopus subject areas

  • Acoustics and Ultrasonics
  • Instrumentation
  • Applied Mathematics
  • Signal Processing
  • Speech and Hearing


