InterSpeech 2021

GANSpeech: Adversarial Training for High-Fidelity Multi-Speaker Speech Synthesis
(3-minute introduction)

Jinhyeok Yang, Jae-Sung Bae, Taejun Bak, Young-Ik Kim, Hoon-Young Cho (NCSOFT, Korea)
Recent advances in neural multi-speaker text-to-speech (TTS) models have enabled the generation of reasonably good speech quality with a single model and made it possible to synthesize a speaker's voice from limited training data. Fine-tuning the multi-speaker model on target-speaker data can further improve quality; however, a gap to real speech samples remains, and the fine-tuned model becomes speaker-dependent. In this work, we propose GANSpeech, a high-fidelity multi-speaker TTS model that applies adversarial training to a non-autoregressive multi-speaker TTS model. In addition, we propose a simple but effective automatic scaling method for the feature matching loss used in adversarial training. In subjective listening tests, GANSpeech significantly outperformed the baseline multi-speaker FastSpeech and FastSpeech2 models, and achieved a better MOS than the speaker-specific fine-tuned FastSpeech2.
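
To make the automatic-scaling idea concrete, below is a minimal PyTorch-style sketch, not the paper's exact formulation. It assumes real_feats and fake_feats are lists of intermediate discriminator feature maps, recon_loss is the mel-spectrogram reconstruction loss, and the feature matching weight is recomputed at each step as the detached ratio of the reconstruction loss to the raw feature matching loss, so the two terms stay on a comparable scale.

    import torch
    import torch.nn.functional as F

    def feature_matching_loss(real_feats, fake_feats):
        # L1 distance between discriminator feature maps of real and
        # generated mel-spectrograms, averaged over layers.
        loss = 0.0
        for r, f in zip(real_feats, fake_feats):
            loss = loss + F.l1_loss(f, r.detach())
        return loss / len(real_feats)

    def scaled_fm_loss(recon_loss, fm_loss, eps=1e-8):
        # Hypothetical automatic scaling: weight the FM term so its
        # magnitude tracks the reconstruction loss. The scale is
        # detached so gradients flow only through fm_loss itself.
        scale = (recon_loss / (fm_loss + eps)).detach()
        return scale * fm_loss

In a training loop under these assumptions, the generator objective would combine recon_loss, the adversarial loss, and scaled_fm_loss(recon_loss, fm_loss), removing the need to hand-tune a fixed feature matching weight per dataset or speaker set.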