InterSpeech 2021

M3: MultiModal Masking applied to sentiment analysis
(Oral presentation)

Efthymios Georgiou (NTUA, Greece), Georgios Paraskevopoulos (NTUA, Greece), Alexandros Potamianos (NTUA, Greece)
A common issue when training multimodal architectures is that not all modalities contribute equally to the model’s prediction, and the network tends to over-rely on the strongest modality. In this work, we present M³, a training procedure based on modality masking for deep multimodal architectures. During network training, we randomly select one modality and mask its features, forcing the model to make its prediction in the absence of this modality. This structured regularization allows the network to better exploit complementary information in the input modalities. We implement M³ as a generic layer that can be integrated with any multimodal architecture. Our experiments show that M³ outperforms other masking schemes and improves the performance of our strong baseline. We evaluate M³ on multimodal sentiment analysis with CMU-MOSEI, achieving results comparable to the state of the art.
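As a rough illustration of the masking idea described above (a minimal sketch, not the authors' implementation; the function name and signature are hypothetical), a training-time step that randomly zeroes out one modality's features might look like:

```python
import numpy as np

def m3_mask(modalities, rng, p_mask=1.0):
    """Sketch of M3-style modality masking during training.

    modalities: list of np.ndarray feature tensors, one per modality
                (e.g. text, audio, video).
    rng: np.random.Generator used to pick the masked modality.
    p_mask: probability of applying masking on this training step
            (hypothetical knob; the paper implements M3 as a layer
            inside the network).
    Returns the list with exactly one modality's features zeroed
    when masking is applied, forcing the model to predict without it.
    """
    if rng.random() >= p_mask:
        return modalities
    idx = rng.integers(len(modalities))  # pick one modality uniformly
    return [np.zeros_like(m) if i == idx else m
            for i, m in enumerate(modalities)]
```

At inference time no masking would be applied, so the model sees all modalities; during training the random dropout of a whole modality acts as the structured regularizer the abstract describes.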