SuperLectures.com

SPARSE CODING OF AUDITORY FEATURES FOR MACHINE HEARING IN INTERFERENCE

Full Paper at IEEE Xplore

Innovative Representations of Audio

Presented by: Richard Lyon
Authors: Richard Lyon, Jay Ponte, Gal Chechik (Google Inc., United States)

A key problem in using the output of an auditory model as the input to a machine-learning system in a machine-hearing application is finding a good feature-extraction layer. For systems such as PAMIR (the passive-aggressive model for image retrieval) that work well with a large sparse feature vector, a conversion from auditory images to sparse features is needed. For audio-file ranking and retrieval from text queries based on stabilized auditory images, we took a multi-scale approach, using vector quantization to choose one sparse feature in each of many overlapping regions at different scales. The hope was that, in some regions, the features for a sound would remain stable even when other interfering sounds were present and affecting other regions. We recently extended our testing of this approach to sound mixtures, and found that the sparse-coded auditory-image features degrade less in interference than vector-quantized MFCC sparse features do. This initial success suggests that our hope of robustness in interference may indeed be realizable, via the general idea of sparse features that are localized in a domain where signal components tend to be localized or stable.
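The multi-scale vector-quantization step described above can be illustrated with a short sketch. The Python code below is a minimal, hypothetical illustration rather than the authors' implementation: the region layout (50%-overlapping boxes at a few scales), the codebook sizes, and the use of raw auditory-image pixels as region descriptors are assumptions made only for the example. It shows how choosing one codeword per region produces a long, sparse feature vector of the kind PAMIR consumes.

```python
# Minimal sketch (not the authors' code) of turning a stabilized auditory
# image into a large sparse feature vector: for each of several overlapping
# rectangular regions at different scales, vector-quantize the region against
# a region-specific codebook and set a single "winner" bit.
import numpy as np

def region_boxes(height, width, scales=(1, 2, 4)):
    """Overlapping boxes: at each scale, tile the image with 50%-overlapping
    rectangles of size (height/scale, width/scale). Layout is an assumption."""
    boxes = []
    for s in scales:
        bh, bw = height // s, width // s
        step_h, step_w = max(bh // 2, 1), max(bw // 2, 1)
        for top in range(0, height - bh + 1, step_h):
            for left in range(0, width - bw + 1, step_w):
                boxes.append((top, left, bh, bw))
    return boxes

def sparse_code(auditory_image, codebooks, boxes):
    """One-of-K code per region, concatenated into one long sparse vector.

    codebooks[i] has shape (K_i, bh*bw) and corresponds to boxes[i]."""
    chunks = []
    for (top, left, bh, bw), codebook in zip(boxes, codebooks):
        patch = auditory_image[top:top + bh, left:left + bw].ravel()
        # Winner-take-all: only the nearest codeword's index is set to 1.
        dists = np.linalg.norm(codebook - patch, axis=1)
        code = np.zeros(codebook.shape[0])
        code[np.argmin(dists)] = 1.0
        chunks.append(code)
    return np.concatenate(chunks)

# Toy usage: a random "auditory image" and random codebooks of 256 codewords
# per region (in practice codebooks would be trained, e.g. with k-means).
rng = np.random.default_rng(0)
H, W = 64, 64
boxes = region_boxes(H, W)
codebooks = [rng.standard_normal((256, bh * bw)) for (_, _, bh, bw) in boxes]
features = sparse_code(rng.random((H, W)), codebooks, boxes)
print(features.shape, int(features.sum()))  # sum equals the number of regions
```

Because each region contributes exactly one active component, interference that corrupts some regions leaves the codes chosen in unaffected regions unchanged, which is the locality argument behind the hoped-for robustness.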


  Lecture Information

Recorded: 2011-05-26 09:50 - 10:10, Club D
Added: 2011-06-15 08:07
Number of views: 54
Video resolution: 1024x576 px, 512x288 px
Video length: 0:18:54
Audio track: MP3 [6.39 MB], 0:18:54