SuperLectures.com

DISTRIBUTED TRAINING OF LARGE SCALE EXPONENTIAL LANGUAGE MODELS

Full Paper at IEEE Xplore

Language Modeling

Speaker: Bhuvana Ramabhadran. Authors: Abhinav Sethy, Stanley Chen, Bhuvana Ramabhadran, IBM, United States

Shrinkage-based exponential language models, such as the recently introduced Model M, have provided significant gains over a range of tasks. Training such models requires a large amount of computational resources in terms of both time and memory. In this paper, we present a distributed training algorithm for such models based on the idea of cluster expansion. Cluster expansion allows us to efficiently calculate the normalization and expectation terms required for Model M training by minimizing the computation needed between consecutive n-grams. We also show how the algorithm can be implemented in a distributed environment, greatly reducing both the memory required per process and the training time.
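To illustrate the kind of saving cluster expansion exploits, the sketch below (illustrative Python, not the authors' implementation; all names and data structures are assumptions) precomputes the word-only part of the normalizer Z(h) once, so each history only needs correction terms for the few words that carry history-specific features. The per-history cost then scales with the number of such features rather than with the vocabulary size; the paper's further step of ordering n-grams so that consecutive histories overlap is not shown here.

    # Minimal sketch of the normalization trick underlying cluster expansion.
    # For an exponential n-gram model, Z(h) = sum_w exp(score(h, w)) is
    # dominated by words whose only active features are word-level (unigram)
    # features, so that part is precomputed once and shared across histories.
    # All parameters below are illustrative assumptions, not values from the paper.
    import math

    # Hypothetical unigram feature weights.
    unigram_weight = {"the": 1.2, "cat": 0.3, "sat": 0.1, "mat": 0.2, "dog": 0.25}
    # history -> {word: total weight of history-specific features active for (h, w)}
    history_weight = {
        ("the",): {"cat": 0.7, "dog": 0.4},
        ("cat",): {"sat": 0.9},
    }

    # Unigram-only part of the normalizer, computed once for all histories.
    z_unigram = sum(math.exp(w) for w in unigram_weight.values())

    def normalizer(history):
        """Z(history), touching only words with history-specific features."""
        z = z_unigram
        for word, extra in history_weight.get(history, {}).items():
            # Swap this word's unigram-only contribution for its full
            # contribution including the history-specific features.
            z += math.exp(unigram_weight[word] + extra) - math.exp(unigram_weight[word])
        return z

    if __name__ == "__main__":
        for h in [("the",), ("cat",), ("mat",)]:
            print(h, normalizer(h))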



  Slides


0:00:16  Slide 1
0:00:35  Slide 2
0:01:07  Slide 3
0:01:58  Slide 4
0:02:31  Slide 5
0:03:15  Slide 6
0:03:44  Slide 7
0:04:53  Slide 8
0:06:01  Slide 9
0:06:22  Slide 10
0:07:27  Slide 11
0:08:28  Slide 12
0:09:17  Slide 13
0:09:49  Slide 14
0:10:10  Slide 15
0:11:32  Slide 16
0:12:35  Slide 17
0:13:08  Slide 18
0:13:38  Slide 19
0:14:09  Slide 20
0:14:20  Slide 21
0:14:28  Slide 22
0:15:05  Slide 20
0:15:25  Slide 23
0:17:07  Slide 24


  Lecture information

Recorded: 2011-05-25 16:35 - 16:55, Club H
Added: June 9, 2011, 01:58
Views: 47
Video resolution: 1024x576 px, 512x288 px
Video length: 0:19:16
Audio track: MP3 [6.58 MB], 0:19:16