InterSpeech 2021

Conformer Parrotron: a Faster and Stronger End-to-end Speech Conversion and Recognition Model for Atypical Speech
(Oral presentation)

Zhehuai Chen (Google, USA), Bhuvana Ramabhadran (Google, USA), Fadi Biadsy (Google, USA), Xia Zhang (Google, USA), Youzheng Chen (Google, USA), Liyang Jiang (Google, USA), Fang Chu (Google, USA), Rohan Doshi (Google, USA), Pedro J. Moreno (Google, USA)
Parrotron is an end-to-end personalizable model that enables many-to-one voice conversion (VC) and automated speech recognition (ASR) simultaneously for atypical speech. In this work, we present the next-generation Parrotron model with improvements in overall accuracy and in training and inference speed. The proposed architecture builds on the recent Conformer encoder, comprising convolution- and attention-based blocks, used in ASR. We introduce architectural modifications that subsample encoder activations to achieve speed-ups in training and inference. We show that jointly improving ASR and voice conversion quality requires a corresponding upsampling after the Conformer encoder blocks. We provide an in-depth analysis of how the proposed approach can maximize the efficiency of a speech-to-speech conversion model in the context of atypical speech. Experiments on both many-to-one and one-to-one dysarthric speech conversion tasks show that we can achieve up to 7× speedup and a 35% relative reduction in WER over the previous best Transformer Parrotron.
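The subsample-then-upsample idea at the heart of the speed-up can be sketched in a few lines of NumPy. This is a minimal illustration only: the subsampling rate, feature dimension, and pooling/repetition operators below are assumptions for clarity, not the paper's exact layers.

```python
import numpy as np

def subsample(x, rate=4):
    """Reduce the time resolution of encoder activations by average-pooling
    with stride `rate` (a stand-in for strided subsampling layers).
    x: (T, D) array of per-frame activations."""
    T = (x.shape[0] // rate) * rate            # drop trailing frames that don't fill a window
    return x[:T].reshape(-1, rate, x.shape[1]).mean(axis=1)

def upsample(x, rate=4):
    """Restore the original frame rate by repeating each activation `rate`
    times (nearest-neighbour; the paper's exact upsampling is not shown here)."""
    return np.repeat(x, rate, axis=0)

# 100 encoder frames of 144-dim activations (illustrative sizes)
acts = np.random.randn(100, 144)
down = subsample(acts)    # (25, 144): 4x fewer frames for the Conformer blocks to process
up = upsample(down)       # (100, 144): full rate restored for the spectrogram decoder
```

The subsampled sequence is what makes training and inference faster (attention cost scales with sequence length), while the upsampling restores the temporal resolution the speech decoder needs, matching the abstract's point that speed-ups from subsampling require a corresponding upsampling to preserve conversion quality.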