|Tinglong Zhu (Duke Kunshan University, China), Xiaoyi Qin (Duke Kunshan University, China), Ming Li (Duke Kunshan University, China)|
Although deep neural networks are successful for many tasks in the speech domain, their high computational and memory costs make it difficult to deploy high-performance neural network systems directly on low-resource embedded devices. Several mechanisms exist to reduce the size of neural networks, e.g., parameter pruning and parameter quantization. This paper focuses on applying binary neural networks to the task of speaker verification. The proposed binarization of trainable parameters largely maintains performance while significantly reducing storage requirements and computational costs. Experimental results show that, after binarizing the convolutional neural network, the ResNet34-based network achieves an EER of around 5% on the VoxCeleb1 test set, and even outperforms the traditional real-valued network on the text-dependent dataset "Xiaole", while providing a 32× memory saving.
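The 32× memory saving follows from replacing each 32-bit floating-point weight with a single bit. A minimal sketch of one common binarization scheme (sign of the weight plus a mean-absolute-value scaling factor, in the XNOR-Net style) is shown below; the function name and the per-tensor granularity of the scaling factor are assumptions for illustration, since the abstract does not specify the exact scheme used in the paper.

```python
import numpy as np

def binarize_weights(w):
    """Approximate a real-valued weight tensor w by alpha * b, where
    b has entries in {-1, +1} (storable as 1 bit each) and alpha is a
    shared real scaling factor. Replacing a 32-bit float per weight
    with 1 bit yields the ~32x memory saving mentioned above.
    Note: the per-tensor scaling granularity here is an assumption."""
    alpha = np.abs(w).mean()           # scaling factor: mean absolute weight
    b = np.where(w >= 0, 1.0, -1.0)    # binary codes in {-1, +1}
    return alpha, b

# Usage: reconstruct a binary approximation of the original weights.
w = np.array([0.5, -0.25, 0.75, -1.0])
alpha, b = binarize_weights(w)
w_hat = alpha * b                      # binary approximation of w
```

At inference time, multiplications against such binarized weights can be replaced by sign flips and additions (or XNOR/popcount operations when activations are also binarized), which is the source of the computational savings claimed above.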