InterSpeech 2021

Segmental Contrastive Predictive Coding for Unsupervised Word Segmentation
(3-minute introduction)

Saurabhchand Bhati, Jesús Villalba, Piotr Żelasko, Laureano Moro-Velázquez, Najim Dehak (Johns Hopkins University, USA)
Automatic detection of phoneme- or word-like units is one of the core objectives in zero-resource speech processing. Recent attempts employ self-supervised training methods, such as contrastive predictive coding (CPC), where the next frame is predicted given past context. However, CPC only looks at the audio signal’s frame-level structure. We overcome this limitation with a segmental contrastive predictive coding (SCPC) framework that can model the signal structure at a higher level, e.g., at the phoneme level. In this framework, a convolutional neural network learns frame-level representations from the raw waveform via noise-contrastive estimation (NCE). A differentiable boundary detector finds variable-length segments, which are then used to optimize a segment encoder via NCE to learn segment representations. The differentiable boundary detector allows us to train the frame-level and segment-level encoders jointly. Typically, phoneme and word segmentation are treated as separate tasks. We unify them and experimentally show that our single model outperforms existing phoneme and word segmentation methods on the TIMIT and Buckeye datasets. We also analyze the impact of the boundary threshold and the right point in training at which to introduce the segmental loss.
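
Both the frame-level and segment-level objectives described above are noise-contrastive: a context representation must score its true continuation higher than randomly drawn distractors. The PyTorch sketch below illustrates that loss under stated assumptions; the function name info_nce_loss, the cosine-similarity scoring, and the temperature value are illustrative choices of ours, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(context, positive, negatives, temperature=0.1):
        # context:   (B, D)    context vector c_t produced by the encoder
        # positive:  (B, D)    embedding of the true next frame (or next segment)
        # negatives: (B, N, D) embeddings of N randomly sampled distractors
        pos = F.cosine_similarity(context, positive, dim=-1).unsqueeze(1)   # (B, 1)
        neg = F.cosine_similarity(context.unsqueeze(1), negatives, dim=-1)  # (B, N)
        logits = torch.cat([pos, neg], dim=1) / temperature                 # (B, 1+N)
        # The positive example sits at index 0 of every row, so the target
        # class for the cross-entropy is always 0.
        labels = torch.zeros(logits.size(0), dtype=torch.long)
        return F.cross_entropy(logits, labels)

The segment-level NCE takes the same form, with segment embeddings produced from the boundary detector's output standing in for frame embeddings.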
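The abstract's final sentence refers to a boundary threshold. As a rough illustration of that idea only: a detector can place boundaries where the dissimilarity between consecutive frame representations peaks above a threshold. The hard peak-picking sketch below is our assumption of that mechanism; the paper's actual detector is differentiable so that gradients flow through segmentation, and the name pick_boundaries and the threshold value are hypothetical.

    import torch
    import torch.nn.functional as F

    def pick_boundaries(frames, threshold=0.05):
        # frames: (T, D) frame-level representations for one utterance.
        # Dissimilarity between each pair of consecutive frames: shape (T-1,)
        dissim = 1.0 - F.cosine_similarity(frames[:-1], frames[1:], dim=-1)
        # Mark local peaks in dissimilarity that also exceed the threshold.
        peak = (dissim[1:-1] > dissim[:-2]) & (dissim[1:-1] > dissim[2:])
        # +1 realigns the peak mask with indices into dissim.
        return torch.nonzero(peak & (dissim[1:-1] > threshold)).squeeze(-1) + 1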