ASRU 2011


Efficient Algorithms for Learning Sparse Models from Large Amounts of Data

Yoram Singer (Google Inc.)

We will review the design, analysis, and implementation of several sparsity-promoting learning algorithms. We start with an efficient projected-gradient algorithm that projects onto the L1 ball. We then describe a forward-backward splitting method (Fobos) that incorporates L1 and mixed-norm regularizers. We next present adaptive-gradient versions of these methods that generalize well-studied subgradient methods. We conclude with a description of a recent approach to "sparse counting" that facilitates compact yet accurate language modeling.
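For concreteness, here is a minimal NumPy sketch of the first two ingredients the abstract names: the sort-based Euclidean projection onto the L1 ball, and a single Fobos step with an L1 regularizer, whose closed-form prox (coordinate-wise soft thresholding) is what yields exact zeros. The function names and signatures are assumptions for illustration, not the speaker's code.

import numpy as np

def project_l1_ball(v, radius=1.0):
    # Illustrative sketch, not from the talk.
    # Euclidean projection of v onto {w : ||w||_1 <= radius},
    # via the O(d log d) sort-based method.
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]               # magnitudes, descending
    css = np.cumsum(u)
    # largest index rho with u[rho] > (css[rho] - radius) / (rho + 1)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)  # shrinkage level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def fobos_l1_step(w, grad, eta, lam):
    # Fobos: an unconstrained gradient step, then the prox of
    # eta * lam * ||.||_1, i.e. coordinate-wise soft thresholding.
    v = w - eta * grad
    return np.sign(v) * np.maximum(np.abs(v) - eta * lam, 0.0)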

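The mixed-norm and adaptive-gradient variants admit equally small sketches; again, the names and the diagonal-AdaGrad form below are assumptions on my part. The L1/L2 prox shrinks whole groups of weights at once (structured sparsity), while the adaptive variant replaces the single step size with a per-coordinate one built from past squared gradients.

import numpy as np

def fobos_group_l2_step(w, grad, eta, lam, groups):
    # Fobos with an L1/L2 mixed norm: each group's L2 norm is soft
    # thresholded, so entire groups are zeroed together.
    # groups: list of index arrays assumed to partition the coordinates.
    v = w - eta * grad
    out = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > eta * lam:
            out[g] = (1.0 - eta * lam / norm) * v[g]
    return out

def adagrad_fobos_l1_step(w, grad, g2_sum, eta, lam, eps=1e-8):
    # Adaptive-gradient variant (illustrative): per-coordinate steps
    # eta / sqrt(sum of past squared gradients), then the matching
    # per-coordinate soft threshold. g2_sum is carried across iterations.
    g2_sum = g2_sum + grad ** 2
    step = eta / (np.sqrt(g2_sum) + eps)
    v = w - step * grad
    w_new = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
    return w_new, g2_sum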
  Outline

0:00:01  Intro
0:00:04  Classical Sparse Models
0:00:13  Loss & L0 Norm
0:00:25  Subgradients
0:00:34  Two Phase Approach
0:00:46  Fobos with L1
0:00:49  Structured Sparsity
0:00:55  Fobos with L?
0:01:01  Fobos with Mixed-Norms
0:01:07  Fobos in High Dimensions
0:01:13  Fobos Results
0:01:28  Problem with Fobos
0:01:40  Efficient Adaptation
0:01:43  Experiments
0:02:01  Thanks & Credits