Probabilistic Embeddings for Speaker Diarization
Anna Silnova, Niko Brummer, Johan Rohdin, Themos Stafylakis, Lukas Burget
Speaker embeddings (x-vectors) extracted from very short segments of speech have recently been shown to give competitive performance in speaker diarization. We generalize this recipe by extracting from each speech segment, in parallel with the x-vector, also a diagonal precision matrix, thus providing a path for propagating information about the quality of the speech segment into a PLDA scoring backend. These precisions quantify the uncertainty about what the values of the embeddings might have been if they had been extracted from high-quality speech segments. The proposed *probabilistic embeddings* (x-vectors with precisions) are interfaced with the PLDA model by treating the x-vectors as hidden variables and marginalizing them out. We apply the proposed probabilistic embeddings as input to an agglomerative hierarchical clustering (AHC) algorithm to do diarization on the DIHARD'19 evaluation set. We compute the full PLDA likelihood `by the book' for each clustering hypothesis that is considered by AHC. We show that this gives accuracy gains relative to a baseline AHC algorithm that is applied to traditional x-vectors (without uncertainty) and that scores clusters by averaging binary log-likelihood-ratios, rather than by the book.
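To make the marginalization concrete, the following is a minimal sketch (not the authors' implementation) of how per-segment precisions can enter PLDA scoring. It assumes a simple two-covariance PLDA: a speaker mean `y ~ N(mu, B)`, a clean (hidden) x-vector `x | y ~ N(y, W)`, and an observed embedding `e | x ~ N(x, P^{-1})` with diagonal precision `P`. Marginalizing the hidden x-vector simply inflates each segment's within-speaker covariance by `P^{-1}`, after which the same-vs-different-speaker log-likelihood ratio is a standard Gaussian computation. All variable names and the two-covariance parameterization are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def plda_llr_with_uncertainty(e1, P1, e2, P2, mu, B, W):
    """Same- vs different-speaker LLR for two segment embeddings e1, e2
    with diagonal precisions P1, P2, under an assumed two-covariance PLDA:
    speaker mean y ~ N(mu, B), clean x-vector x | y ~ N(y, W).
    Marginalizing the hidden clean x-vector adds P^{-1} to the
    within-speaker covariance of each observed embedding."""
    C1 = B + W + np.diag(1.0 / P1)  # marginal covariance of e1 given y=mu
    C2 = B + W + np.diag(1.0 / P2)
    # Different-speaker hypothesis: independent marginals.
    ll_diff = (multivariate_normal.logpdf(e1, mu, C1)
               + multivariate_normal.logpdf(e2, mu, C2))
    # Same-speaker hypothesis: joint Gaussian with cross-covariance B
    # (both embeddings share the same hidden speaker mean y).
    joint_mean = np.concatenate([mu, mu])
    joint_cov = np.block([[C1, B], [B, C2]])
    ll_same = multivariate_normal.logpdf(np.concatenate([e1, e2]),
                                         joint_mean, joint_cov)
    return ll_same - ll_diff

# Toy usage: identical embeddings should score positive (same speaker),
# opposite embeddings negative, and low precision (high uncertainty)
# should pull the score toward zero.
mu, B, W = np.zeros(2), np.eye(2), 0.5 * np.eye(2)
sharp = np.array([10.0, 10.0])   # confident segments
fuzzy = np.array([0.1, 0.1])     # noisy segments
e = np.array([1.0, 1.0])
llr_same = plda_llr_with_uncertainty(e, sharp, e, sharp, mu, B, W)
llr_same_fuzzy = plda_llr_with_uncertainty(e, fuzzy, e, fuzzy, mu, B, W)
```

By-the-book AHC scoring generalizes this pairwise LLR to full clustering hypotheses, evaluating the joint PLDA likelihood of each candidate partition rather than averaging pairwise scores.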