Interspeech 2021

4-bit Quantization of LSTM-based Speech Recognition Models

Andrea Fasoli, Chia-Yu Chen, Mauricio Serrano, Xiao Sun, Naigang Wang, Swagath Venkataramani, George Saon, Xiaodong Cui, Brian Kingsbury, Wei Zhang, Zoltán Tüske, Kailash Gopalakrishnan (IBM, USA)
We investigate the impact of aggressive low-precision representations of weights and activations in two families of large LSTM-based architectures for Automatic Speech Recognition (ASR): hybrid Deep Bidirectional LSTM - Hidden Markov Models (DBLSTM-HMMs) and Recurrent Neural Network - Transducers (RNN-Ts). Using a 4-bit integer representation, a naïve quantization approach applied to the LSTM portion of these models results in significant Word Error Rate (WER) degradation. On the other hand, we show that minimal accuracy loss is achievable with an appropriate choice of quantizers and initializations. In particular, we customize quantization schemes depending on the local properties of the network, improving recognition performance while limiting computational time. We demonstrate our solution on the Switchboard (SWB) and CallHome (CH) test sets of the NIST Hub5-2000 evaluation. DBLSTM-HMMs trained with 300 or 2000 hours of SWB data achieve <0.5% and <1% average WER degradation, respectively. On the more challenging RNN-T models, our quantization strategy limits the degradation of 4-bit inference to 1.3%.
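To make the notion of "naïve quantization" referred to above concrete, the sketch below simulates uniform symmetric 4-bit quantization of a tensor in Python. It is a minimal illustration, not the paper's implementation: the max-magnitude scale and the optional `clip` argument (standing in for a per-layer clipping range of the kind the paper tunes) are assumptions for exposition.

```python
import numpy as np

def quantize_symmetric(x, num_bits=4, clip=None):
    """Fake-quantize a tensor onto a symmetric signed integer grid.

    Illustrative sketch of naive uniform quantization. `clip` is a
    hypothetical stand-in for a chosen/learned clipping range; when
    omitted, the scale falls back to the max absolute value, which is
    the "naive" choice that tends to waste resolution on outliers.
    """
    if clip is None:
        clip = np.max(np.abs(x))              # naive scaling to the extreme value
    qmax = 2 ** (num_bits - 1) - 1            # 7 for signed 4-bit integers
    scale = clip / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)  # integer codes in [-8, 7]
    return q * scale                          # dequantized values for simulation

# Example: fake-quantizing a Gaussian weight matrix to 4 bits
w = np.random.randn(256, 256).astype(np.float32)
w_q = quantize_symmetric(w, num_bits=4)
print("max abs quantization error:", np.max(np.abs(w - w_q)))
```

Passing a tighter `clip` than the max magnitude trades saturation of outliers for finer resolution in the bulk of the distribution, which is the kind of quantizer/range choice the abstract credits with recovering most of the lost accuracy.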