Abstract
Text summarization is an information condensation technique that abbreviates a source document into a few representative sentences, with the aim of creating a coherent summary that contains the relevant information of the source. The field has developed rapidly since the advent of deep learning. However, summarization models based on deep neural networks have several critical shortcomings. First, a large amount of labeled training data is necessary; this problem is especially acute for low-resource languages, for which publicly available labeled data do not exist. In addition, substantial computational resources are required to train neural models with enormous numbers of network parameters. In this study, we propose the Learning-Free Integer Programming Summarizer (LFIP-SUM), an unsupervised extractive summarization model. The advantage of our approach is that no parameter training is necessary because the model does not require any labeled training data. To achieve this, we formulate an integer programming problem over pre-trained sentence embedding vectors. We also use principal component analysis (PCA) to automatically determine the number of sentences to extract and to evaluate the importance of each sentence. Experimental results demonstrate that the proposed model achieves generally acceptable performance compared with deep learning summarization models, although it learns no parameters during the model construction process.
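The pipeline described in the abstract (PCA to set the summary length and score sentences, then an integer program to select them) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact formulation: the variance cutoff `var_threshold`, the projection-based scoring rule, and the single cardinality constraint are assumptions made for the example, and the actual LFIP-SUM objective and constraints are defined in the full text. The sketch uses only NumPy and SciPy's `milp` solver.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def summarize(embeddings, var_threshold=0.8):
    """Select representative sentences from pre-trained sentence
    embeddings via PCA-based scoring and a 0-1 integer program.

    embeddings : (n_sentences, dim) array of sentence vectors.
    var_threshold : hypothetical cumulative-variance cutoff used here
        to derive the summary length k; the paper determines k from
        PCA as well, but its exact rule may differ.
    """
    X = embeddings - embeddings.mean(axis=0)           # center the data
    # PCA via SVD: rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var_ratio = (S ** 2) / np.sum(S ** 2)
    # k = number of components needed to reach the variance cutoff;
    # here it doubles as the number of sentences to extract.
    k = int(np.searchsorted(np.cumsum(var_ratio), var_threshold) + 1)
    # Importance score: variance-weighted magnitude of each sentence's
    # projection onto the top-k principal components (illustrative rule).
    scores = np.abs(X @ Vt[:k].T) @ var_ratio[:k]
    n = len(scores)
    # 0-1 ILP: maximize total importance, selecting exactly k sentences.
    res = milp(
        c=-scores,                                     # milp minimizes
        constraints=LinearConstraint(np.ones((1, n)), k, k),
        integrality=np.ones(n),                        # binary variables
        bounds=Bounds(0, 1),
    )
    return np.flatnonzero(res.x > 0.5)                 # selected indices
```

Given an embedding matrix from any pre-trained sentence encoder, `summarize(E)` returns the indices of the extracted sentences; a practical system would add constraints such as a length budget or a redundancy penalty to the integer program.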
| Original language | English |
|---|---|
| Article number | 9321308 |
| Pages (from-to) | 14358-14368 |
| Number of pages | 11 |
| Journal | IEEE Access |
| Volume | 9 |
| DOIs | |
| Publication status | Published - 2021 |
Keywords
- Text summarization
- integer linear programming
- natural language processing
- sentence representation vector
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering