Adaptation Transforms of Auto-Associative Neural Networks as Features for Speaker Verification
We present a new approach to using Auto-Associative Neural Networks (AANNs) within the conventional GMM speaker verification framework with i-vector feature extraction and PLDA modeling. In this technique, an i-vector feature extractor is trained on adaptation parameters obtained from a mixture of AANNs. So that each AANN models a part of the speaker's acoustic space, the mixture is trained with an objective function based on posterior probabilities of broad phonetic classes. The AANN-based i-vectors are fused with GMM-based i-vectors, and a joint PLDA model is trained on the fused vectors. The proposed approach yields promising results on its own and significant gains when combined with baseline systems on the telephone conditions of the NIST SRE 2010 and the recently concluded IARPA BEST 2011 speaker evaluations.
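The fusion step described above can be illustrated with a minimal sketch: each per-utterance i-vector is length-normalized and the two streams are concatenated before joint back-end modeling. The 400-dimensional size, the helper names, and the use of cosine similarity in place of a trained PLDA scorer are all illustrative assumptions, not details from the paper.

```python
import numpy as np

def length_normalize(v):
    """Project an i-vector onto the unit sphere (a common PLDA preprocessing step)."""
    return v / np.linalg.norm(v)

def fuse_ivectors(iv_gmm, iv_aann):
    """Feature-level fusion: length-normalize each stream, then concatenate.
    A joint PLDA model would be trained on these fused vectors."""
    return np.concatenate([length_normalize(iv_gmm), length_normalize(iv_aann)])

def cosine_score(enroll, test):
    """Cosine similarity, used here as a lightweight stand-in for PLDA scoring."""
    return float(np.dot(enroll, test)
                 / (np.linalg.norm(enroll) * np.linalg.norm(test)))

# Hypothetical 400-dim i-vectors for one enrollment and one test utterance.
rng = np.random.default_rng(0)
iv_gmm_enroll = rng.standard_normal(400)   # GMM-based i-vector (assumed size)
iv_aann_enroll = rng.standard_normal(400)  # AANN-based i-vector (assumed size)
enroll = fuse_ivectors(iv_gmm_enroll, iv_aann_enroll)

# Simulate a same-speaker test utterance as a small perturbation of enrollment.
test = fuse_ivectors(iv_gmm_enroll + 0.1 * rng.standard_normal(400),
                     iv_aann_enroll + 0.1 * rng.standard_normal(400))
print(enroll.shape[0], cosine_score(enroll, test) > 0.9)
```

In a full system the cosine scorer would be replaced by the joint PLDA model, and calibration/fusion weights would be learned on a development set.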