InterSpeech 2021

A Systematic Review and Analysis of Multilingual Data Strategies in Text-to-Speech for Low-Resource Languages
(Oral presentation)

Phat Do (Rijksuniversiteit Groningen, The Netherlands), Matt Coler (Rijksuniversiteit Groningen, The Netherlands), Jelske Dijkstra (Rijksuniversiteit Groningen, The Netherlands), Esther Klabbers (ReadSpeaker, The Netherlands)
We provide a systematic review of past studies that use multilingual data for text-to-speech (TTS) of low-resource languages (LRLs). We focus on the strategies these studies use to incorporate multilingual data and on how those strategies affect output speech quality. To investigate the difference in output quality between corresponding monolingual and multilingual models, we propose a novel measure that makes this difference comparable across the included studies and their various evaluation metrics. This measure, called the Multilingual Model Effect (MLME), is found to be affected by: acoustic model architecture, the difference ratio of target language data between corresponding multilingual and monolingual experiments, the balance ratio of target language data to total data, and the amount of target language data used. These findings can serve as a reference for data strategies in future experiments with multilingual TTS models for LRLs. Language family classification, despite being widely used, is not found to be an effective criterion for selecting source languages.
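The abstract does not give the exact formula for the Multilingual Model Effect (MLME). The sketch below is a hypothetical illustration of the general idea it describes: expressing the multilingual-vs-monolingual quality difference as a relative change, so that scores from different evaluation metrics (e.g. MOS, where higher is better, or MCD, where lower is better) can be compared on one scale. The function name and normalization are assumptions, not the paper's definition.

```python
def mlme(multilingual_score: float, monolingual_score: float,
         higher_is_better: bool = True) -> float:
    """Hypothetical sketch: relative change in an evaluation score when
    moving from a monolingual to a corresponding multilingual model.

    Normalizing by the monolingual baseline puts heterogeneous metrics
    (MOS, MCD, WER of the synthesized speech, ...) on a comparable scale.
    """
    delta = multilingual_score - monolingual_score
    if not higher_is_better:
        # For error-like metrics (e.g. MCD), a decrease is an improvement.
        delta = -delta
    return delta / abs(monolingual_score)

# Illustrative only: MOS rising from 3.2 (monolingual) to 3.6 (multilingual)
# yields a positive effect; MCD dropping from 6.0 to 5.4 does as well.
print(mlme(3.6, 3.2))                          # MOS, higher is better
print(mlme(5.4, 6.0, higher_is_better=False))  # MCD, lower is better
```

A sign convention like this (positive = multilingual model is better) would let the effect be aggregated across studies regardless of which metric each study reports.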