Interspeech 2021

LT-LM: a novel non-autoregressive language model for single-shot lattice rescoring
(3-minute introduction)

Anton Mitrofanov (ITMO University, Russia), Mariya Korenevskaya (STC-innovations, Russia), Ivan Podluzhny (ITMO University, Russia), Yuri Khokhlov (STC-innovations, Russia), Aleksandr Laptev (ITMO University, Russia), Andrei Andrusenko (ITMO University, Russia), Aleksei Ilin (STC-innovations, Russia), Maxim Korenevsky (STC-innovations, Russia), Ivan Medennikov (ITMO University, Russia), Aleksei Romanenko (ITMO University, Russia)
Neural network-based language models are commonly used in rescoring approaches to improve the quality of modern automatic speech recognition (ASR) systems. Most of the existing methods are computationally expensive since they rely on autoregressive language models. We propose a novel rescoring approach that processes the entire lattice in a single call to the model. The key feature of our rescoring policy is a novel non-autoregressive Lattice Transformer Language Model (LT-LM). This model takes the whole lattice as input and predicts a new language score for each arc. Additionally, we propose an artificial lattice generation approach to incorporate a large amount of text data into the LT-LM training process. Our single-shot rescoring performs orders of magnitude faster than other rescoring methods in our experiments: it is more than 300 times faster than pruned RNNLM lattice rescoring and N-best rescoring, while being only slightly inferior in terms of WER.
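
To make the single-shot idea concrete, here is a minimal PyTorch sketch, not the authors' implementation, of a non-autoregressive lattice-scoring model: each lattice arc is embedded from its word label plus hypothetical source/destination-state features, a Transformer encoder attends over all arcs at once, and a linear head outputs one new language score per arc. All class and parameter names, dimensions, and the topology encoding below are illustrative assumptions.

# Minimal sketch of single-shot lattice rescoring: every lattice arc becomes
# one input token, a Transformer encoder processes all arcs in parallel
# (non-autoregressively), and a linear head emits one language score per arc.
# Names and the arc/topology encoding are hypothetical, not from the paper.
import torch
import torch.nn as nn

class LatticeTransformerLM(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 256,
                 nhead: int = 4, num_layers: int = 4, max_states: int = 512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        # Hypothetical topology features: embeddings of an arc's source and
        # destination lattice states replace sequential positions.
        self.src_emb = nn.Embedding(max_states, d_model)
        self.dst_emb = nn.Embedding(max_states, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.score_head = nn.Linear(d_model, 1)  # one language score per arc

    def forward(self, words, src_states, dst_states, pad_mask=None):
        # words / src_states / dst_states: (batch, num_arcs) integer tensors
        x = (self.word_emb(words)
             + self.src_emb(src_states)
             + self.dst_emb(dst_states))
        h = self.encoder(x, src_key_padding_mask=pad_mask)
        return self.score_head(h).squeeze(-1)  # (batch, num_arcs) arc scores

# One call rescoring a toy lattice of 20 arcs: all scores come from a single
# forward pass instead of per-hypothesis sequential decoding.
model = LatticeTransformerLM(vocab_size=10000)
words = torch.randint(0, 10000, (1, 20))
src = torch.randint(0, 512, (1, 20))
dst = torch.randint(0, 512, (1, 20))
new_lm_scores = model(words, src, dst)  # shape (1, 20)

Because no arc score depends on previously produced scores, the whole lattice is processed in one model call; this parallelism over arcs is what makes such single-shot rescoring far faster than sequential autoregressive rescoring of N-best lists or lattice paths.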