
EXTENSIONS OF RECURRENT NEURAL NETWORK LANGUAGE MODEL

Full Paper at IEEE Xplore

Language Modeling

Presented by: Tomáš Mikolov
Author(s): Tomáš Mikolov, Stefan Kombrink, Lukáš Burget, Jan Černocký (Brno University of Technology, Czech Republic); Sanjeev Khudanpur (The Johns Hopkins University, United States)

We present several modifications of the original recurrent neural network language model (RNN LM). While this model has been shown to significantly outperform many competitive language modeling techniques in terms of accuracy, the remaining problem is its computational complexity. In this work, we show approaches that lead to more than a 15-fold speedup in both the training and testing phases. Next, we show the importance of using the backpropagation through time algorithm. An empirical comparison with feedforward networks is also provided. Finally, we discuss possibilities for reducing the number of parameters in the model. The resulting RNN model can thus be smaller, faster during both training and testing, and more accurate than the basic one.
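The largest part of the reported speedup comes from factorizing the output layer: each word is assigned to a class based on its unigram frequency, and the network first predicts the class and then the word within that class, so only a small fraction of the output units needs to be evaluated at each step. The sketch below (not the authors' code) illustrates this idea on a simple Elman-style recurrent network in NumPy; the vocabulary size, class count, hidden-layer size, and the toy Zipf-like word counts are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

V, C, H = 1000, 20, 50  # vocab size, number of classes, hidden units (assumed)

# Words are indexed by descending frequency; assign each word to a class so
# that every class covers roughly the same share of unigram probability mass.
counts = 1.0 / np.arange(1, V + 1)            # toy Zipf-like unigram counts
cum = np.cumsum(counts) / counts.sum()
word2class = np.minimum((cum * C).astype(int), C - 1)
class_words = [np.flatnonzero(word2class == c) for c in range(C)]
# (Very frequent words can leave the lowest-index classes empty in this toy
# binning; that is harmless for the sketch.)

# Parameters of the simple (Elman) recurrent network.
U = rng.normal(0.0, 0.1, (H, V))    # input word -> hidden
W = rng.normal(0.0, 0.1, (H, H))    # hidden -> hidden (recurrence)
Vc = rng.normal(0.0, 0.1, (C, H))   # hidden -> class scores
Vw = rng.normal(0.0, 0.1, (V, H))   # hidden -> within-class word scores

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step(state, word_id):
    """One time step: fold the current word into the hidden state."""
    return 1.0 / (1.0 + np.exp(-(U[:, word_id] + W @ state)))

def next_word_prob(state, word_id):
    """P(word | history) = P(class | state) * P(word | class, state).

    Only C + |class| output units are evaluated instead of V, which is
    where the speedup over a full softmax comes from."""
    c = word2class[word_id]
    members = class_words[c]
    p_class = softmax(Vc @ state)[c]
    p_word = softmax(Vw[members] @ state)[np.flatnonzero(members == word_id)[0]]
    return p_class * p_word

# Toy usage: score a short random word sequence with the untrained model.
state = np.zeros(H)
logp = 0.0
for w in rng.integers(0, V, size=5):
    logp += np.log(next_word_prob(state, w))
    state = step(state, w)
print("log P(sequence) =", logp)
```

In the paper, the classes are derived from the training corpus' unigram counts, and the network is trained with (truncated) backpropagation through time; both details are simplified away in this untrained sketch.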



  Slides


 1. slide (0:00:16)
 2. slide (0:00:36)
 3. slide (0:01:38)
 4. slide (0:02:48)
 5. slide (0:03:47)
 6. slide (0:05:49)
 7. slide (0:07:35)
 8. slide (0:07:59)
 9. slide (0:09:15)
10. slide (0:10:02)
11. slide (0:11:50)
12. slide (0:13:00)
13. slide (0:13:48)
14. slide (0:14:57)
15. slide (0:16:16)


  Lecture Information

Recorded: 2011-05-25 17:15 - 17:35, Club H
Added: 2011-06-09 02:13
Number of views: 64
Video resolution: 1024x576 px, 512x288 px
Video length: 0:17:55
Audio track: MP3 [6.12 MB], 0:17:55