Abstract
Training a sequence-to-sequence (S2S) automatic speech recognition (ASR) post-processor requires parallel pairs (e.g., a speech recognition result and its human post-edited sentence) to construct the dataset, which demands a great amount of human labor. BackTranScription (BTS) is a data-building method proposed to mitigate this limitation of existing S2S-based ASR post-processors: it automatically generates vast amounts of training data, reducing the time and cost of data construction. Despite the emergence of this novel approach, BTS-based ASR post-processors still face open research challenges and remain largely untested across diverse approaches. In this study, we highlight these challenges through detailed experiments that analyze a data-centric approach (i.e., controlling the amount of data without altering the model) and a model-centric approach (i.e., modifying the model). In other words, we point out problems with the current trend of research pursuing model-centric approaches and caution against ignoring the importance of the data. Our experimental results show that the data-centric approach outperformed the model-centric approach by +11.69, +17.64, and +19.02 points in F1-score, BLEU, and GLEU, respectively.
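The abstract does not spell out the BTS procedure itself. As described in the BTS literature, a clean human-written sentence is synthesized to speech (TTS) and re-recognized (ASR), and the noisy recognition output is paired with the original sentence as (source, target) training data for the post-processor, so no human post-editing is needed. The minimal Python sketch below illustrates only this pairing step under that assumption; `mock_asr` is a hypothetical stand-in for a real TTS-to-ASR round trip, not part of any BTS release.

```python
import random

def mock_asr(sentence: str, drop_prob: float = 0.15) -> str:
    """Hypothetical stand-in for a real TTS -> ASR round trip.

    Mimics common recognition artifacts: lost casing and
    punctuation, plus occasional word deletions.
    """
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    kept = [w for w in words if random.random() > drop_prob]
    return " ".join(kept if kept else words)

def build_bts_pairs(corpus: list[str]) -> list[tuple[str, str]]:
    """Pair each pseudo ASR output (source) with its clean
    original sentence (target); these (noisy, clean) pairs can
    train an S2S post-processor without human post-editing."""
    return [(mock_asr(sentence), sentence) for sentence in corpus]

if __name__ == "__main__":
    random.seed(0)
    corpus = [
        "The meeting starts at nine, so please arrive early.",
        "Back transcription turns plain text corpora into training pairs.",
    ]
    for source, target in build_bts_pairs(corpus):
        print(f"source (pseudo ASR) : {source}")
        print(f"target (human text) : {target}\n")
```

Scaling the size of `corpus` while leaving the post-processor untouched corresponds to the data-centric axis the study varies; the model-centric axis instead modifies the S2S architecture.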
Original language | English
---|---
Article number | 3618
Journal | Mathematics
Volume | 10
Issue number | 19
DOIs |
Publication status | Published - 2022 Oct
Bibliographical note
Publisher Copyright: © 2022 by the authors.
Keywords
- automatic speech recognition
- backtranscription
- data-centric
- machine translation
- model-centric
- post-processor
ASJC Scopus subject areas
- Computer Science (miscellaneous)
- General Mathematics
- Engineering (miscellaneous)