Abstract
Speech-based interfaces are convenient and intuitive, and are therefore strongly preferred for human-computer interaction with Internet of Things (IoT) devices. A pre-defined keyword is typically used as a trigger that notifies a device to accept subsequent voice commands. Keyword spotting techniques used as voice-trigger mechanisms typically model the target keyword with triphone models and non-keywords with single-state filler models. Recently, deep neural networks (DNNs) have outperformed hidden Markov models with Gaussian mixture models in various tasks, including speech recognition. However, conventional DNN-based keyword spotting methods cannot change the target keyword easily, which is an essential feature of a speech-based IoT device interface. In addition, the increased computational requirements preclude the use of complex filler models in DNN-based keyword spotting systems, which diminishes their accuracy. In this paper, we propose a novel DNN-based keyword spotting system that alters the keyword on the fly and utilizes triphone and monophone acoustic models to reduce computational complexity and increase generalization performance. Experimental results on the FFMTIMIT corpus show that the proposed method reduces the error rate by 36.6%.
| Original language | English |
| ---|--- |
| Article number | 8641328 |
| Pages (from-to) | 188-194 |
| Number of pages | 7 |
| Journal | IEEE Transactions on Consumer Electronics |
| Volume | 65 |
| Issue number | 2 |
| Publication status | Published - May 2019 |
Keywords
- Deep neural network
- keyword spotting
- multitask learning
ASJC Scopus subject areas
- Media Technology
- Electrical and Electronic Engineering