Interspeech 2021

Phoneme Recognition through Fine Tuning of Phonetic Representations: a Case Study on Luhya Language Varieties
(3-minute introduction)

Kathleen Siminyu (Georgia Tech, USA), Xinjian Li (Carnegie Mellon University, USA), Antonios Anastasopoulos (George Mason University, USA), David R. Mortensen (Carnegie Mellon University, USA), Michael R. Marlo (University of Missouri, USA), Graham Neubig (Carnegie Mellon University, USA)
Models pre-trained on multiple languages have shown significant promise for improving speech recognition, particularly for low-resource languages. In this work, we focus on phoneme recognition using Allosaurus, a method for multilingual recognition based on phonetic annotation, which incorporates phonological knowledge through a language-dependent allophone layer that associates a universal narrow phone set with the phonemes that appear in each language. To evaluate in a challenging real-world scenario, we curate phone recognition datasets for Bukusu and Saamia, two varieties of the Luhya language cluster of western Kenya and eastern Uganda. To our knowledge, these datasets are the first of their kind. We carry out similar experiments on a dataset of East Tusom, an endangered Tangkhulic language of the Tibeto-Burman family spoken mostly in India. We explore both zero-shot and few-shot recognition, fine-tuning on datasets of varying sizes (10 to 1000 utterances). We find that fine-tuning Allosaurus, even with just 100 utterances, leads to significant reductions in phone error rate.
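
To make the allophone-layer idea concrete, the sketch below (not the authors' or Allosaurus's actual code) maps per-frame universal narrow-phone scores to language-specific phoneme scores by taking, for each phoneme, the maximum over the scores of its associated allophones. The phone inventory and allophone mapping shown are toy values invented purely for illustration.

    import numpy as np

    # Toy universal narrow-phone inventory and a hypothetical
    # phoneme -> allowed-allophones mapping (illustrative only).
    universal_phones = ["b", "β", "k", "x", "a"]
    allophone_map = {"b": ["b", "β"], "k": ["k", "x"], "a": ["a"]}
    phonemes = list(allophone_map)

    def allophone_layer(phone_logits):
        # phone_logits: (num_frames, num_universal_phones) scores from a shared encoder.
        # Returns (num_frames, num_phonemes) language-specific phoneme scores.
        idx = {p: i for i, p in enumerate(universal_phones)}
        out = np.full((phone_logits.shape[0], len(phonemes)), -np.inf)
        for j, ph in enumerate(phonemes):
            cols = [idx[a] for a in allophone_map[ph]]
            out[:, j] = phone_logits[:, cols].max(axis=1)  # max over the phoneme's allophones
        return out

    # Example: two frames of dummy encoder scores -> phoneme scores.
    frames = np.random.randn(2, len(universal_phones))
    print(allophone_layer(frames))

In the actual system this mapping sits inside the trained network rather than as a standalone post-processing step; the snippet only illustrates how a universal phone set can be projected onto each language's phoneme inventory.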