InterSpeech 2021

The Zero Resource Speech Challenge 2021: Spoken language modelling
(3-minute introduction)

Ewan Dunbar (University of Toronto, Canada), Mathieu Bernard (LSCP (UMR 8554), France), Nicolas Hamilakis (LSCP (UMR 8554), France), Tu Anh Nguyen (LSCP (UMR 8554), France), Maureen de Seyssel (LSCP (UMR 8554), France), Patricia Rozé (LSCP (UMR 8554), France), Morgane Rivière (Facebook, France), Eugene Kharitonov (Facebook, France), Emmanuel Dupoux (LSCP (UMR 8554), France)
We present the Zero Resource Speech Challenge 2021, which asks participants to learn a language model directly from audio, without any text or labels. The challenge is based on the Libri-light dataset, which provides up to 60k hours of audio from English audio books without any associated text. We provide a pipeline baseline system consisting of an encoder based on contrastive predictive coding (CPC), a quantizer (k-means) and a standard language model (BERT or LSTM). The metrics evaluate the learned representations at the acoustic (ABX discrimination), lexical (spot-the-word), syntactic (acceptability judgment) and semantic (similarity judgment) levels. We present an overview of the eight submitted systems from four groups and discuss the main results.
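The quantization step of the baseline pipeline can be illustrated with a minimal sketch: continuous frame embeddings (such as CPC encoder outputs) are assigned to their nearest k-means centroid, producing the discrete unit sequence on which the language model is trained. The array values, dimensionality, and codebook size below are toy stand-ins, not the actual baseline configuration.

```python
import numpy as np

def quantize(frames, centroids):
    # k-means inference: assign each frame embedding to its nearest
    # centroid, turning continuous features into discrete pseudo-units.
    dists = np.linalg.norm(frames[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy stand-in for encoder output: 5 frames of 2-d embeddings
# (a real CPC encoder produces much higher-dimensional features).
frames = np.array([[0.1, 0.0], [0.9, 1.0], [1.1, 0.9], [0.0, 0.2], [0.2, 0.1]])
# Toy codebook of 2 centroids (the actual baseline uses a larger k).
centroids = np.array([[0.0, 0.0], [1.0, 1.0]])
units = quantize(frames, centroids)
print(units.tolist())  # discrete unit sequence fed to the BERT/LSTM LM
```

The resulting unit sequence plays the role that word or character tokens play in text-based language modelling.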