InterSpeech 2021

Super-Human Performance in Online Low-latency Recognition of Conversational Speech
(3-minute introduction)

Thai-Son Nguyen (KIT, Germany), Sebastian Stüker (KIT, Germany), Alex Waibel (KIT, Germany)
Achieving super-human performance in recognizing human speech has been a goal for several decades, as researchers have worked on increasingly challenging tasks. In the 1990s it was discovered that conversational speech between two humans is considerably more difficult to recognize than read speech: hesitations, disfluencies, false starts, and sloppy articulation complicate acoustic processing and require robust joint handling of acoustic, lexical, and language context. Early attempts with statistical models could only reach word error rates (WER) of over 50%, far from human performance, which shows a WER of around 5.5%. Neural hybrid models and recent attention-based encoder-decoder models have considerably improved performance, as such contexts can now be learned in an integral fashion. However, processing such contexts requires the presentation of an entire utterance and thus introduces unwanted delays before a recognition result can be output. In this paper, we address performance as well as latency. We present results for a system that achieves super-human performance, i.e. a WER of 5.0% on the Switchboard conversational benchmark, at a word-based latency of only 1 second behind a speaker's speech. The system uses multiple attention-based encoder-decoder networks integrated within a novel low-latency incremental inference approach.
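One common way to obtain low word-based latency from an attention-based model is to re-decode the audio received so far after each new chunk and commit only the word prefix that has remained stable across successive partial hypotheses. The following is a minimal illustrative sketch of that general idea, not the authors' actual system; the function names and the toy input are hypothetical, and the real approach in the paper involves full encoder-decoder networks and a more elaborate incremental inference scheme.

```python
# Toy sketch of incremental inference via hypothesis stability.
# After each chunk, a streaming decoder produces a (possibly revised)
# partial transcript; we emit only words that agree between the last
# two partial hypotheses, so emitted words are never retracted.

def common_prefix(a, b):
    """Longest common word prefix of two hypotheses."""
    out = []
    for x, y in zip(a, b):
        if x != y:
            break
        out.append(x)
    return out

def incremental_decode(partial_hypotheses):
    """Emit words as soon as they are stable across successive updates.

    `partial_hypotheses` stands in for the sequence of partial
    transcripts a decoder would produce after each audio chunk.
    Returns a list of (chunk_index, word) pairs recording when each
    word was committed.
    """
    committed = 0      # number of words already output
    prev = []          # previous partial hypothesis
    emitted = []
    for i, hyp in enumerate(partial_hypotheses):
        stable = common_prefix(prev, hyp)
        for w in stable[committed:]:
            emitted.append((i, w))
        committed = max(committed, len(stable))
        prev = hyp
    # At utterance end, flush the remaining words of the final hypothesis.
    for w in prev[committed:]:
        emitted.append((len(partial_hypotheses) - 1, w))
    return emitted

# Example: "i" becomes stable at chunk 1, "think" at chunk 2,
# and "so" is flushed at the end of the utterance.
hyps = [["i"], ["i", "think"], ["i", "think", "so"]]
print(incremental_decode(hyps))
```

The trade-off this sketch exposes is the one the paper targets: waiting for agreement before committing a word trades a small amount of latency for stability, instead of waiting for the entire utterance.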