InterSpeech 2021

Zero-Shot Text-to-Speech for Text-Based Insertion in Audio Narration
(3-minute introduction)

Chuanxin Tang (Microsoft, China), Chong Luo (Microsoft, China), Zhiyuan Zhao (Microsoft, China), Dacheng Yin (USTC, China), Yucheng Zhao (USTC, China), Wenjun Zeng (Microsoft, China)
Given a piece of speech and its transcript, text-based speech editing aims to generate speech that can be seamlessly inserted into the given recording by editing the transcript. Existing methods adopt a two-stage approach: synthesize the input text with a generic text-to-speech (TTS) engine and then transform the result into the desired voice using voice conversion (VC). A major problem with this framework is that VC is itself a challenging task that usually requires a moderate amount of parallel training data to work satisfactorily. In this paper, we propose a one-stage, context-aware framework that generates natural and coherent target speech without any training data from the target speaker. In particular, we perform accurate zero-shot duration prediction for the inserted text, and the predicted duration is used to regulate both the text embedding and the speech embedding. Then, based on the aligned cross-modality input, we directly generate the mel-spectrogram of the edited speech with a transformer-based decoder. Subjective listening tests show that, despite the lack of training data for the target speaker, our method achieves satisfactory results and outperforms a recent zero-shot TTS engine by a large margin.
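To make the pipeline concrete, below is a minimal PyTorch sketch of the one-stage idea described above: predicted durations expand (length-regulate) the text embedding to the frame level, the regulated text embedding is combined with the surrounding speech context, and a transformer decoder predicts mel frames for the inserted region. All module names, dimensions, and the specific way context is attended to are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch only; module names and sizes are assumptions,
    # not the paper's actual architecture.
    import torch
    import torch.nn as nn


    def length_regulate(text_emb: torch.Tensor, durations: torch.Tensor) -> torch.Tensor:
        """Repeat each text-token embedding according to its predicted frame duration.

        text_emb:  (T_text, D) embeddings of the inserted text
        durations: (T_text,)   predicted frame counts per token (long tensor)
        returns:   (sum(durations), D) frame-level text embedding
        """
        return torch.repeat_interleave(text_emb, durations, dim=0)


    class InsertionDecoder(nn.Module):
        """Transformer decoder that attends to the surrounding speech context
        while generating mel frames for the edited region (illustrative only)."""

        def __init__(self, d_model: int = 256, n_mels: int = 80,
                     n_heads: int = 4, n_layers: int = 3):
            super().__init__()
            layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
            self.decoder = nn.TransformerDecoder(layer, n_layers)
            self.mel_proj = nn.Linear(d_model, n_mels)

        def forward(self, regulated_text: torch.Tensor,
                    speech_context: torch.Tensor) -> torch.Tensor:
            # regulated_text: (B, T_frames, d_model) frame-aligned text embedding
            # speech_context: (B, T_ctx, d_model)    embedding of the untouched speech
            hidden = self.decoder(tgt=regulated_text, memory=speech_context)
            return self.mel_proj(hidden)  # (B, T_frames, n_mels)


    if __name__ == "__main__":
        # Toy example: 5 inserted tokens, each with a predicted duration in frames.
        text_emb = torch.randn(5, 256)
        durations = torch.tensor([3, 5, 2, 4, 6])
        frames = length_regulate(text_emb, durations).unsqueeze(0)  # (1, 20, 256)

        context = torch.randn(1, 120, 256)  # surrounding speech embedding
        mel = InsertionDecoder()(frames, context)
        print(mel.shape)  # torch.Size([1, 20, 80])

In this sketch the zero-shot aspect comes from conditioning only on the surrounding speech of the unseen speaker (the memory of the decoder) rather than on any speaker-specific training data.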