Duration Controllable Voice Conversion via Phoneme-Based Information Bottleneck

Sang-Hoon Lee, Hyeong-Rae Noh, Woo-Jeoung Nam, Seong-Whan Lee

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)


Several voice conversion (VC) methods using a simple autoencoder with a carefully designed information bottleneck have recently been studied. In general, they extract content information from a given speech through the information bottleneck between the encoder and the decoder, providing it to the decoder along with the target speaker information to generate the converted speech. However, their performance is highly dependent on the downsampling factor of the information bottleneck. In addition, such frame-by-frame conversion methods cannot convert speaking styles associated with the length of the utterance, such as duration. In this paper, we propose a novel duration controllable voice conversion (DCVC) model, which can transfer the speaking style and control the speed of the converted speech through a phoneme-based information bottleneck. The proposed information bottleneck does not require finding an appropriate downsampling factor, achieving better audio quality and VC performance. In our experiments, DCVC outperformed the baseline models with a 3.78 MOS and a 3.83 similarity score. It can also smoothly control the speech duration while achieving a 39.35× inference speedup over a Seq2seq-based VC.
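One plausible reading of a phoneme-based information bottleneck, sketched below under assumptions not spelled out in the abstract: frame-level content features are average-pooled within each phoneme segment (discarding duration), and the pooled features are later re-expanded by speed-scaled durations before decoding. The function names `phoneme_pool` and `expand` and the toy shapes are illustrative only, not the authors' implementation.

```python
import numpy as np

def phoneme_pool(frames, boundaries):
    # Average frame-level content features within each phoneme segment,
    # removing duration information (the phoneme-based bottleneck idea).
    return np.stack([frames[s:e].mean(axis=0) for s, e in boundaries])

def expand(phoneme_feats, durations, speed=1.0):
    # Repeat each phoneme feature by its speed-scaled duration so a
    # decoder could synthesize at a controlled speaking rate.
    reps = np.maximum(1, np.round(np.asarray(durations) / speed)).astype(int)
    return np.repeat(phoneme_feats, reps, axis=0)

# Toy example: 10 frames of 4-dim features split into 3 phoneme segments.
frames = np.random.randn(10, 4)
bnds = [(0, 3), (3, 7), (7, 10)]
pooled = phoneme_pool(frames, bnds)          # shape (3, 4): one vector per phoneme
slow = expand(pooled, [3, 4, 3], speed=0.5)  # halving speed doubles the length
print(pooled.shape, slow.shape)              # (3, 4) (20, 4)
```

Because the bottleneck stores one vector per phoneme rather than per frame, the output length is decoupled from the input length, which is what makes duration control possible.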

Original language: English
Pages (from-to): 1173-1183
Number of pages: 11
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Publication status: Published - 2022

Bibliographical note

Publisher Copyright:
© 2014 IEEE.


Keywords

  • Information bottleneck
  • non-autoregressive model
  • voice conversion
  • voice style transfer

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Acoustics and Ultrasonics
  • Computational Mathematics
  • Electrical and Electronic Engineering


