InterSpeech 2021

Augmenting Slot Values and Contexts for Spoken Language Understanding with Pretrained Models
(3-minute introduction)

Haitao Lin (CAS, China), Lu Xiang (CAS, China), Yu Zhou (CAS, China), Jiajun Zhang (CAS, China), Chengqing Zong (CAS, China)
Spoken Language Understanding (SLU) is one essential step in building a dialogue system. Due to the expensive cost of obtaining the labeled data, SLU suffers from the data scarcity problem. Therefore, in this paper, we focus on data augmentation for slot filling task in SLU. To achieve that, we aim at generating more diverse data based on existing data. Specifically, we try to exploit the latent language knowledge from pretrained language models by finetuning them. We propose two strategies for finetuning process: value-based and context-based augmentation. Experimental results on two public SLU datasets have shown that compared with existing data augmentation methods, our proposed method can generate more diverse sentences and significantly improve the performance on SLU.