Joint Audio Visual Processing

Full Paper at IEEE Xplore

Presented by: Anna Llagostera Casanovas
Authors: Anna Llagostera Casanovas, Pierre Vandergheynst (Ecole Polytechnique Fédérale de Lausanne, Switzerland)

We propose a novel method to automatically detect and extract the video modality of the sound sources that are present in a scene. For this purpose, we first assess the synchrony between the moving objects captured with a video camera and the sounds recorded by a microphone. Next, video regions presenting a high coherence with the soundtrack are automatically labelled as being part of the source. This represents the starting point for an innovative video segmentation approach, whose objective is to extract the complete audio-visual object. The proposed graph-cut segmentation procedure includes an audio-visual term that links together pixels in regions with high audio-video coherence. Our approach is demonstrated on challenging sequences presenting non-stationary sound sources and distracting moving objects.
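The pipeline above can be illustrated with a small sketch: given a per-pixel audio-video coherence map, pixels with high coherence are encouraged to take the "source" label, and a pairwise smoothness term links neighbouring pixels, with the labelling found by a minimum s-t cut. This is not the authors' implementation; the coherence values, the weight `lam`, and the plain Edmonds-Karp max-flow solver are illustrative assumptions.

```python
# Minimal binary graph-cut segmentation sketch with a hypothetical
# audio-visual unary term. NOT the authors' code: coherence scores,
# weights, and the solver are illustrative assumptions.
from collections import deque

EPS = 1e-9  # treat tiny floating-point residuals as zero capacity

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a dict-of-dicts capacity graph (in place)."""
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > EPS and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return
        # Collect the path and its bottleneck capacity
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        # Push flow: shrink forward capacity, grow the reverse residual
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v][u] = cap[v].get(u, 0.0) + bottleneck

def segment(coherence, lam=0.1):
    """Label each pixel 1 (audio-visual source) or 0 (background).

    coherence[i][j] in [0, 1] is a hypothetical audio-video coherence
    score; lam weighs the pairwise smoothness term.
    """
    h, w = len(coherence), len(coherence[0])
    s, t = 'src', 'snk'
    cap = {s: {}, t: {}}
    for i in range(h):
        for j in range(w):
            cap[(i, j)] = {}
    for i in range(h):
        for j in range(w):
            p = (i, j)
            c = coherence[i][j]
            cap[s][p] = c          # cost of labelling p as background
            cap[p][t] = 1.0 - c    # cost of labelling p as the source
            for di, dj in ((0, 1), (1, 0)):   # 4-neighbour smoothness
                n = (i + di, j + dj)
                if n in cap:
                    cap[p][n] = cap[p].get(n, 0.0) + lam
                    cap[n][p] = cap[n].get(p, 0.0) + lam
    max_flow(cap, s, t)
    # Pixels still reachable from the source in the residual graph lie on
    # the source side of the minimum cut -> labelled as the AV source.
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > EPS and v not in seen:
                seen.add(v)
                q.append(v)
    return [[1 if (i, j) in seen else 0 for j in range(w)] for i in range(h)]


labels = segment([[0.9, 0.9, 0.1],
                  [0.9, 0.9, 0.1]])
# the high-coherence 2x2 block on the left is labelled 1
```

The smoothness weight `lam` plays the role of the audio-visual term that links pixels inside coherent regions: raising it yields larger, more compact source regions at the cost of precise boundaries.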


  Lecture Information

Recorded: 2011-05-24 16:35 - 16:55, Club H
Added: 2011-06-15 11:16
Video resolution: 1024x576 px, 512x288 px
Video length: 0:18:16
Audio track: MP3 [6.24 MB], 0:18:16