SuperLectures.com

EXTENSIONS OF RECURRENT NEURAL NETWORK LANGUAGE MODEL

Full Paper at IEEE Xplore

Language Modeling

Speaker: Tomáš Mikolov. Authors: Tomáš Mikolov, Stefan Kombrink, Lukas Burget, Jan Cernocky, Brno University of Technology, Czech Republic; Sanjeev Khudanpur, The Johns Hopkins University, United States

We present several modifications of the original recurrent neural network language model (RNN LM). While this model has been shown to significantly outperform many competitive language modeling techniques in terms of accuracy, the remaining problem is its computational complexity. In this work, we show approaches that lead to more than a 15-fold speedup in both the training and testing phases. Next, we show the importance of using the backpropagation through time algorithm. An empirical comparison with feedforward networks is also provided. Finally, we discuss possibilities for reducing the number of parameters in the model. The resulting RNN model can thus be smaller, faster during both training and testing, and more accurate than the basic one.
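The more-than-15-fold speedup mentioned in the abstract comes largely from factorizing the output layer into word classes, so that the softmax is normalized first over classes and then only over the words within one class. The following is a minimal NumPy sketch of that idea, not the authors' RNNLM toolkit code: the dimensions, weight names (U, W, Vc, Vw), and the random word-to-class assignment are all illustrative assumptions (in the paper, classes are derived from unigram frequencies).

```python
import numpy as np

# Minimal sketch of a simple (Elman) recurrent NN language model with a
# class-factorized output layer. All sizes and names are illustrative.
V, C, H = 10000, 100, 200          # vocabulary size, word classes, hidden units
rng = np.random.default_rng(0)

U = rng.normal(0, 0.1, (H, V))     # input word -> hidden weights
W = rng.normal(0, 0.1, (H, H))     # hidden -> hidden (recurrent) weights
Vc = rng.normal(0, 0.1, (C, H))    # hidden -> class weights
Vw = rng.normal(0, 0.1, (V, H))    # hidden -> word weights (within a class)

# Assumed: each word id belongs to exactly one class (random here; the paper
# assigns classes by unigram frequency).
word2class = rng.integers(0, C, size=V)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(word_id, h_prev):
    """One time step: hidden state update, then P(class | history)."""
    h = 1.0 / (1.0 + np.exp(-(U[:, word_id] + W @ h_prev)))  # sigmoid units
    p_class = softmax(Vc @ h)                  # distribution over C classes
    return h, p_class

def prob_of_word(next_id, h, p_class):
    """P(next word) = P(class | h) * P(word | class, h); the word softmax is
    normalized only over words sharing the class, costing O(C + V/C), not O(V)."""
    c = word2class[next_id]
    members = np.flatnonzero(word2class == c)  # words in the same class
    p_in_class = softmax(Vw[members] @ h)
    return p_class[c] * p_in_class[members == next_id][0]

h = np.zeros(H)
h, p_class = step(42, h)               # consume one (hypothetical) word id
print(prob_of_word(137, h, p_class))   # score a candidate next word
```

With roughly sqrt(V) classes, each prediction evaluates on the order of C + V/C output units instead of all V, which is where most of the reported speedup comes from.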



  Slides


Slide 1: 0:00:16
Slide 2: 0:00:36
Slide 3: 0:01:38
Slide 4: 0:02:48
Slide 5: 0:03:47
Slide 6: 0:05:49
Slide 7: 0:07:35
Slide 8: 0:07:59
Slide 9: 0:09:15
Slide 10: 0:10:02
Slide 11: 0:11:50
Slide 12: 0:13:00
Slide 13: 0:13:48
Slide 14: 0:14:57
Slide 15: 0:16:16


  Lecture information

Recorded: 2011-05-25 17:15 - 17:35, Club H
Added: 2011-06-09 02:13
Views: 64
Video resolution: 1024x576 px, 512x288 px
Video length: 0:17:55
Audio track: MP3 [6.12 MB], 0:17:55