SuperLectures.com

MAXIMUM MARGINAL LIKELIHOOD ESTIMATION FOR NONNEGATIVE DICTIONARY LEARNING

Non-negative Tensor Factorization and Blind Separation

Full Paper at IEEE Xplore

Presented by: Onur Dikmen, Author(s): Onur Dikmen, Cédric Févotte, CNRS LTCI / Télécom ParisTech, France

We describe an alternative to standard nonnegative matrix factorisation (NMF) for nonnegative dictionary learning. NMF with the Kullback-Leibler divergence can be seen as maximisation of the joint likelihood of the dictionary and the expansion coefficients under Poisson observation noise. This approach lacks statistical optimality because the number of parameters (which include the expansion coefficients) grows with the number of observations. We therefore describe a variational EM algorithm for optimisation of the marginal likelihood, i.e., the likelihood of the dictionary where the expansion coefficients have been integrated out (given a conjugate Gamma prior). We compare the output of both maximum joint likelihood estimation (i.e., standard NMF) and maximum marginal likelihood estimation (MMLE) on real and synthetic data. The MMLE approach is shown to embed automatic model order selection, similar to automatic relevance determination.
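The baseline the paper compares against, KL-NMF, corresponds to maximum joint likelihood under Poisson noise and is commonly fitted with the multiplicative updates of Lee and Seung. Below is a minimal NumPy sketch of that baseline (not the authors' variational EM algorithm for MMLE); the function name `kl_nmf` and the synthetic-data setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_nmf(V, K, n_iter=200, eps=1e-10):
    """Fit V ~= W @ H by minimising the KL divergence D(V || WH)
    with multiplicative updates. This is equivalent to maximising the
    joint likelihood of (W, H) under Poisson observation noise."""
    F, N = V.shape
    W = rng.random((F, K)) + eps
    H = rng.random((K, N)) + eps
    for _ in range(n_iter):
        # Update expansion coefficients H
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / W.sum(axis=0)[:, None]
        # Update dictionary W
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / H.sum(axis=1)[None, :]
    return W, H

# Synthetic nonnegative data: Poisson counts from a rank-3 model
W0 = rng.random((20, 3))
H0 = rng.random((3, 50))
V = rng.poisson(10 * W0 @ H0).astype(float)
W, H = kl_nmf(V, K=3)
```

Note that the number of free parameters here grows with the number of columns of V (one column of H per observation), which is exactly the issue the marginal likelihood approach in the talk is designed to avoid.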





  Lecture Information

Recorded: 2011-05-26 17:35 - 17:55, Club B
Added: 2011-06-18 03:35
Number of views: 16
Video resolution: 1024x576 px, 512x288 px
Video length: 0:23:08
Audio track: MP3 [7.84 MB], 0:23:08