Enhancing Machine Translation Quality Estimation via Fine-Grained Error Analysis and Large Language Model

Dahyun Jung, Chanjun Park, Sugyeong Eo, Heuiseok Lim

Research output: Contribution to journal › Article › peer-review

Abstract

Fine-grained error span detection is a sub-task of quality estimation that aims to identify the spans and assess the severity of errors present in translated sentences. Prior work on quality estimation has focused predominantly on evaluating translations at the sentence and word levels. However, such approaches fail to capture the severity of specific segments within translated sentences. To the best of our knowledge, this is the first study that concentrates on enhancing models for fine-grained error span detection in machine translation. This study introduces a framework that sequentially performs sentence-level error detection, word-level error span extraction, and severity assessment. We present a detailed analysis of each proposed methodology and substantiate the effectiveness of our system on two language pairs: English-to-German and Chinese-to-English. Our results suggest that task granularity enhances performance and that a prompt-based fine-tuning approach can offer optimal performance on the classification tasks. Furthermore, we demonstrate that employing a large language model to edit the fine-tuned model’s output is a highly effective strategy for achieving robust quality estimation performance.
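
As a reading aid, the sketch below illustrates the sequential decomposition described in the abstract: a sentence-level detector first decides whether a translation contains any error, a word-level tagger then extracts candidate error spans, and a final classifier assigns each span a severity. This is not the authors' released code; the callable names (has_error, tag_spans, rate_severity), the minor/major severity labels, and the toy heuristics in the demo are hypothetical placeholders for the fine-tuned models the paper describes, and the abstract's additional LLM editing step is omitted.

# Illustrative sketch only (not the authors' released code): a minimal
# three-stage pipeline mirroring the framework described in the abstract.
# The callables passed in stand for fine-tuned models; their names and the
# toy heuristics in the demo below are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class ErrorSpan:
    start: int      # character offset where the error span begins
    end: int        # character offset one past the end of the span
    severity: str   # e.g. "minor" or "major"

def detect_error_spans(
    source: str,
    translation: str,
    has_error: Callable[[str, str], bool],
    tag_spans: Callable[[str, str], List[Tuple[int, int]]],
    rate_severity: Callable[[str, str, Tuple[int, int]], str],
) -> List[ErrorSpan]:
    """Run the three stages in sequence; stop early if the sentence-level
    detector judges the translation to be error-free."""
    if not has_error(source, translation):       # stage 1: sentence-level detection
        return []
    spans = tag_spans(source, translation)       # stage 2: word-level span extraction
    return [                                     # stage 3: severity assessment
        ErrorSpan(s, e, rate_severity(source, translation, (s, e)))
        for s, e in spans
    ]

if __name__ == "__main__":
    # Toy stand-ins: flag the literal token "UNK" as a major error.
    demo = detect_error_spans(
        source="Der Hund schläft.",
        translation="The UNK sleeps.",
        has_error=lambda src, mt: "UNK" in mt,
        tag_spans=lambda src, mt: [(mt.index("UNK"), mt.index("UNK") + 3)],
        rate_severity=lambda src, mt, span: "major",
    )
    print(demo)  # [ErrorSpan(start=4, end=7, severity='major')]

Decomposing the task by granularity in this way lets each stage specialize, which the abstract credits for the performance gain; the large-language-model editing step mentioned at the end of the abstract would sit after the final stage as a post-hoc correction of the pipeline's output.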

Original language: English
Article number: 4169
Journal: Mathematics
Volume: 11
Issue number: 19
DOIs
Publication status: Published - 2023 Oct

Bibliographical note

Publisher Copyright:
© 2023 by the authors.

Keywords

  • fine-grained error span detection
  • natural language processing
  • quality estimation

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • General Mathematics
  • Engineering (miscellaneous)
