Recurrent neural network-based semantic variational autoencoder for sequence-to-sequence learning

Myeongjun Jang, Seungwan Seo, Pilsung Kang

Research output: Contribution to journal › Article › peer-review

40 Citations (Scopus)


Sequence-to-sequence (Seq2seq) models have played an important role in the recent success of various natural language processing methods, such as machine translation, text summarization, and speech recognition. However, current Seq2seq models have trouble preserving global latent information from a long sequence of words. The variational autoencoder (VAE) alleviates this problem by learning a continuous semantic space of the input sentence, but it does not solve the problem completely. In this paper, we propose a new recurrent neural network (RNN)-based Seq2seq model, the RNN semantic variational autoencoder (RNN–SVAE), to better capture the global latent information of a sequence of words. To reflect the meanings of words in a sentence regardless of their position within the sentence, we employ two approaches: (1) constructing a document information vector based on the attention information between the final state of the encoder and every prior hidden state, and (2) extracting the semantic vector based on the self-attention mechanism. The mean and standard deviation of the continuous semantic space are then learned from this vector, taking advantage of the variational method. Using the document information vector and the self-attention mechanism to find the semantic space of a sentence makes it possible to better capture its global latent features. Experimental results on three natural language tasks (i.e., language modeling, missing word imputation, and paraphrase identification) confirm that the proposed RNN–SVAE yields higher performance than two benchmark models.
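To make the encoder side of this idea concrete, the following is a minimal sketch in PyTorch: a GRU encodes the sentence, a document information vector is formed by attending from the final hidden state over every hidden state, and that vector parameterizes the mean and log-variance of the latent Gaussian. The names (SVAEEncoder, doc_vector) and all hyperparameters are illustrative assumptions, not the authors' reference implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SVAEEncoder(nn.Module):
        def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, latent_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
            # Linear maps from the document information vector to the
            # mean and log-variance of the latent Gaussian.
            self.to_mu = nn.Linear(hidden_dim, latent_dim)
            self.to_logvar = nn.Linear(hidden_dim, latent_dim)

        def forward(self, tokens):
            # tokens: (batch, seq_len) integer word ids
            h, h_final = self.rnn(self.embed(tokens))      # h: (B, T, H)
            query = h_final[-1].unsqueeze(1)               # final state as query: (B, 1, H)
            # Attention between the final state and every hidden state, so
            # words contribute to the document vector regardless of position.
            scores = torch.bmm(query, h.transpose(1, 2))   # (B, 1, T)
            weights = F.softmax(scores, dim=-1)
            doc_vector = torch.bmm(weights, h).squeeze(1)  # (B, H)
            mu, logvar = self.to_mu(doc_vector), self.to_logvar(doc_vector)
            # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return z, mu, logvar

    # Example: encode a batch of 4 sentences of length 20 over a 10k vocabulary.
    enc = SVAEEncoder(vocab_size=10000)
    z, mu, logvar = enc(torch.randint(0, 10000, (4, 20)))

During training, as in a standard VAE, the decoder's reconstruction loss would be combined with the KL divergence between N(mu, sigma^2) and the standard normal prior.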

Original language: English
Pages (from-to): 59-73
Number of pages: 15
Journal: Information Sciences
Publication status: Published - 2019 Jul
Externally published: Yes

Bibliographical note

Funding Information:
We sincerely appreciate the two anonymous reviewers' valuable comments, especially concerning the self-attention mechanism. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2016R1D1A1B03930729) and Korea Electric Power Corporation (Grant number: R18XA05).

Publisher Copyright:
© 2019


Keywords

  • Auto-encoder
  • Document information vector
  • Natural language processing
  • Recurrent neural network
  • Self-attention mechanism
  • Sequence-to-sequence learning
  • Variational method

ASJC Scopus subject areas

  • Software
  • Information Systems and Management
  • Artificial Intelligence
  • Theoretical Computer Science
  • Control and Systems Engineering
  • Computer Science Applications

