Interspeech 2021

E2E-based Multi-task Learning Approach to Joint Speech and Accent Recognition
(3-minute introduction)

Jicheng Zhang (Xinjiang University, China), Yizhou Peng (Xinjiang University, China), Van Tung Pham (NTU, Singapore), Haihua Xu (NTU, Singapore), Hao Huang (Xinjiang University, China), Eng Siong Chng (NTU, Singapore)
In this paper, we propose a single multi-task learning framework to perform end-to-end (E2E) automatic speech recognition (ASR) and accent recognition (AR) simultaneously. The proposed framework is not only more compact but also yields comparable or even better results than standalone systems. Specifically, we find that overall performance is predominantly determined by the ASR task, and that E2E-based ASR pretraining is essential for achieving improved performance, particularly on the AR task. Additionally, we conduct several analyses of the proposed method. First, although the objective loss for the AR task is much smaller than that of the ASR task, a smaller weighting factor for the AR task in the joint objective function is necessary to yield better results on each task. Second, we find that sharing only a few layers of the encoder yields better AR results than sharing the entire encoder. Experimentally, the proposed method produces WER results close to those of the best standalone E2E ASR system, while achieving 7.7% and 4.2% relative improvements in accent recognition on the test set over the standalone and single-task-based joint recognition methods, respectively.
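For concreteness, the joint objective described above is commonly written as L = (1 − λ)·L_ASR + λ·L_AR, where a small λ keeps the ASR term dominant. The sketch below is a minimal PyTorch-style illustration of this setup under assumptions not stated in the abstract: the class and head names are invented, the encoder split point and λ = 0.1 are arbitrary example values rather than the paper's reported settings.

    import torch
    import torch.nn as nn

    class JointASRAccentModel(nn.Module):
        """Hypothetical multi-task model: the ASR and accent recognition (AR)
        branches share only the first few encoder layers."""
        def __init__(self, encoder_layers, asr_head, ar_head, shared_depth=3):
            super().__init__()
            self.shared = nn.ModuleList(encoder_layers[:shared_depth])    # layers shared by both tasks
            self.asr_only = nn.ModuleList(encoder_layers[shared_depth:])  # remaining ASR-specific layers
            self.asr_head = asr_head  # stand-in for a CTC/attention decoder
            self.ar_head = ar_head    # accent classifier over pooled encoder output

        def forward(self, feats):                     # feats: (batch, frames, dim)
            h = feats
            for layer in self.shared:
                h = layer(h)
            ar_logits = self.ar_head(h.mean(dim=1))  # AR branches off the shared layers
            for layer in self.asr_only:
                h = layer(h)
            return self.asr_head(h), ar_logits

    # Joint objective with a small weight on the AR term, reflecting the
    # finding that the AR loss should be down-weighted.
    lambda_ar = 0.1  # assumed example value
    def joint_loss(asr_loss, ar_loss):
        return (1.0 - lambda_ar) * asr_loss + lambda_ar * ar_loss

    # Example usage with toy dimensions:
    layers = [nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
              for _ in range(6)]
    model = JointASRAccentModel(layers,
                                asr_head=nn.Linear(256, 5000),  # e.g., output vocabulary size
                                ar_head=nn.Linear(256, 8))      # e.g., number of accent classes
    asr_logits, ar_logits = model(torch.randn(2, 100, 256))

In line with the abstract's observation on pretraining, the randomly initialized encoder layers above would in practice be initialized from a standalone E2E ASR model before joint training.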