InterSpeech 2021

Speech2Video: Cross-Modal Distillation for Speech to Video Generation

Shijing Si, Jianzong Wang, Xiaoyang Qu, Ning Cheng, Wenqi Wei, Xinghua Zhu, Jing Xiao (Ping An Technology, China)

This paper investigates a novel task: generating talking-face videos solely from speech. Speech-to-video generation can enable compelling applications in the entertainment, customer-service, and human-computer interaction industries. Indeed, the timbre, accent, and pace of speech can carry rich information about a speaker's appearance. The main challenge lies in disentangling the distinct visual attributes from the audio signal. In this article, we propose a lightweight, cross-modal distillation method to extract disentangled emotion and identity information from unlabelled video inputs. The extracted features are then integrated by a generative adversarial network into talking-face video clips. With carefully crafted discriminators, the proposed framework achieves realistic generation results. Experiments on observed individuals demonstrate that the framework captures emotional expression solely from speech and produces spontaneous facial motion in the output videos. The results of the proposed framework are almost indistinguishable from those of the baseline method, in which speech is combined with a static image of the speaker. User studies also show that the proposed method outperforms existing algorithms in the emotional expressiveness of the generated videos.
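To make the described pipeline concrete, the sketch below illustrates the general shape of such a system: a student audio encoder produces separate identity and emotion codes, a cross-modal distillation loss pulls those codes toward features from pretrained video-based teachers (so no manual labels are needed), and a generator decodes the codes into frames. Every module name, dimension, and loss here is an assumption for illustration only, not the paper's actual architecture.

```python
# Minimal sketch of a speech-to-video pipeline with cross-modal distillation.
# All names, dimensions, and losses are illustrative assumptions, not the
# authors' published architecture.
import torch
import torch.nn as nn

class AudioEncoder(nn.Module):
    """Student: maps a mel-spectrogram to disentangled identity/emotion codes."""
    def __init__(self, n_mels=80, dim=128):
        super().__init__()
        self.backbone = nn.GRU(n_mels, 256, batch_first=True)
        self.to_identity = nn.Linear(256, dim)  # speaker-appearance code
        self.to_emotion = nn.Linear(256, dim)   # emotional-expression code

    def forward(self, mel):                     # mel: (B, T, n_mels)
        _, h = self.backbone(mel)               # h: (1, B, 256)
        h = h.squeeze(0)
        return self.to_identity(h), self.to_emotion(h)

class Generator(nn.Module):
    """Decodes the concatenated codes into a low-resolution video frame."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Tanh(),
        )

    def forward(self, identity, emotion):
        z = torch.cat([identity, emotion], dim=-1)
        return self.net(z).view(-1, 3, 64, 64)

def distillation_loss(student_id, student_emo, teacher_id, teacher_emo):
    """Cross-modal distillation: match audio-derived codes to video-teacher
    features, so disentangled supervision comes from unlabelled video."""
    return (nn.functional.mse_loss(student_id, teacher_id)
            + nn.functional.mse_loss(student_emo, teacher_emo))

# One hypothetical training step (teacher features precomputed from video;
# random tensors stand in for real data here).
encoder, generator = AudioEncoder(), Generator()
mel = torch.randn(4, 100, 80)                   # dummy batch of spectrograms
teacher_id, teacher_emo = torch.randn(4, 128), torch.randn(4, 128)
sid, semo = encoder(mel)
frames = generator(sid, semo)                   # (4, 3, 64, 64) generated frames
loss = distillation_loss(sid, semo, teacher_id, teacher_emo)
loss.backward()  # in full training, adversarial losses from the
                 # carefully crafted discriminators would be added here
```

In a complete system, the frame decoder would be convolutional and the discriminators would score realism, identity consistency, and emotional expression; this sketch only shows how the distillation objective transfers disentangled features across modalities.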