0:00:15 that is, the improvements are on the bottleneck features themselves [unintelligible]
0:00:21 [unintelligible]
0:00:26 and if you've followed this work over the past few years, you'll see they've
0:00:32 made great gains in using deep bottleneck features for LID
0:00:38 so this particular paper
0:00:41 it extends some work that was published, I think, last year. Pretty much
0:00:47 in this work he's using the bottleneck features from the bottleneck layer
0:00:53 to extract i-vectors. What he's doing is basically
0:01:01 taking out the GMM and putting a phonetic mixture factor analysis in its
0:01:06 place, and what this does is it allows a
0:01:10 single step to do the analysis, the feature reduction,
0:01:14 and the combination. It also unlocks some efficiency gains that allow them to
0:01:20 explore doing something like SDC with the bottleneck features, that is, concatenating or extending
the context in
0:01:30 time, which appears to work quite well
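[Editor's note: as a rough illustration only, not the authors' code: the SDC-style context extension mentioned above amounts to stacking each bottleneck feature frame with its neighbors in time. A minimal numpy sketch, with the window sizes purely illustrative:]

```python
import numpy as np

def stack_context(features, left=3, right=3):
    """Concatenate each frame with `left` preceding and `right` following
    frames (edges are padded by repeating the first/last frame).

    features: (num_frames, dim) array of per-frame bottleneck features.
    Returns an array of shape (num_frames, dim * (left + 1 + right)).
    """
    n, _ = features.shape
    # repeat edge frames so every frame has a full context window
    padded = np.pad(features, ((left, right), (0, 0)), mode="edge")
    # column i of the output is the signal shifted by i frames
    return np.hstack([padded[i:i + n] for i in range(left + 1 + right)])

# toy example: 5 frames of 2-dimensional "bottleneck" features
feats = np.arange(10, dtype=float).reshape(5, 2)
stacked = stack_context(feats, left=1, right=1)
print(stacked.shape)  # (5, 6): each frame now carries one frame of context on each side
```

The same idea underlies shifted-delta-cepstra (SDC) front ends for LID; here the shifts are applied to bottleneck features instead of cepstra.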
0:01:34 the test is done on LRE09 with the six most highly confused languages
0:01:39 and he's got some improvement gains, and as you'll see if you come to the
0:01:44 poster, the improvement is less
0:01:47 for three seconds and more for the longer utterances. That's not really surprising, but
0:01:52 if you're interested, it's poster number eleven