TY - GEN
T1 - Capturing Speaker Incorrectness: Speaker-Focused Post-Correction for Abstractive Dialogue Summarization
T2 - 3rd Workshop on New Frontiers in Summarization, NewSum 2021
AU - Lee, Dongyub
AU - Lim, Jungwoo
AU - Whang, Taesun
AU - Lee, Chanhee
AU - Cho, Seungwoo
AU - Pak, Mingun
AU - Lim, Heuiseok
N1 - Funding Information:
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2018-0-01405) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Funding Information:
This work was supported by an Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00368, A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques).
Publisher Copyright:
© 2021 Association for Computational Linguistics.
PY - 2021
Y1 - 2021
N2 - In this paper, we focus on improving the quality of the summary generated by neural abstractive dialogue summarization systems. Even though pre-trained language models generate well-constructed and promising results, it is still challenging to summarize the conversation of multiple participants since the summary should include a description of the overall situation and the actions of each speaker. This paper proposes self-supervised strategies for speaker-focused post-correction in abstractive dialogue summarization. Specifically, our model first discriminates which type of speaker correction is required in a draft summary and then generates a revised summary according to the required type. Experimental results show that our proposed method adequately corrects the draft summaries, and the revised summaries are significantly improved in both quantitative and qualitative evaluations.
UR - http://www.scopus.com/inward/record.url?scp=85138375249&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85138375249
T3 - 3rd Workshop on New Frontiers in Summarization, NewSum 2021 - Workshop Proceedings
SP - 65
EP - 73
BT - 3rd Workshop on New Frontiers in Summarization, NewSum 2021 - Workshop Proceedings
A2 - Carenini, Giuseppe
A2 - Cheung, Jackie Chi Kit
A2 - Dong, Yue
A2 - Liu, Fei
A2 - Wang, Lu
PB - Association for Computational Linguistics (ACL)
Y2 - 10 November 2021
ER -