That covers the improvements to the bottleneck features themselves, and finally we have one more paper.

If you have followed this group's work over the past few years, you will have seen them make great gains in using deep bottleneck features for LID.

So this particular paper extends some work that was published, I think, last year; it follows pretty much the same approach, and he uses bottleneck features taken from the bottleneck layer of a DNN to extract i-vectors.
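
For anyone who wants the mechanics, here is a minimal sketch of that kind of feature extractor, assuming a frame-level DNN trained on phonetic targets with a narrow bottleneck layer; the layer sizes, input dimension, and names below are placeholders I have chosen, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class BottleneckDNN(nn.Module):
    """Hypothetical frame-level DNN with a narrow bottleneck layer.

    The network is trained to predict phonetic targets, but at feature-extraction
    time only the bottleneck activations are kept and passed on as the per-frame
    features for i-vector extraction. Sizes are illustrative, not from the paper.
    """

    def __init__(self, in_dim=39, hidden_dim=1024, bottleneck_dim=64, num_targets=3000):
        super().__init__()
        self.pre = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.Sigmoid(),
            nn.Linear(hidden_dim, hidden_dim), nn.Sigmoid(),
        )
        self.bottleneck = nn.Linear(hidden_dim, bottleneck_dim)
        self.post = nn.Sequential(
            nn.Sigmoid(), nn.Linear(bottleneck_dim, num_targets),
        )

    def forward(self, frames):
        # Full forward pass, used while training on phonetic targets.
        return self.post(self.bottleneck(self.pre(frames)))

    def extract_bottleneck(self, frames):
        # (num_frames, bottleneck_dim) features used in place of the acoustic frames.
        with torch.no_grad():
            return self.bottleneck(self.pre(frames))
```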

What he is basically doing is taking out the GMM and putting a phonetic mixture of factor analysers in its place. What this does is allow a single step to do the factor analysis, the feature reduction, and their combination, and it also unlocks some efficiency gains.
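
To make that single step concrete, here is a hedged sketch of the usual mixture-based factor-analysis computation; the function name, variable shapes, and the diagonal-covariance assumption are mine, not the paper's. The point is that the low-dimensional latent factor is estimated directly from statistics of the bottleneck features, so dimensionality reduction and the i-vector-style extraction collapse into one linear solve.

```python
import numpy as np

def extract_latent_factor(frames, posteriors, means, inv_covs, loadings):
    """Sketch of i-vector-style factor analysis over bottleneck features.

    frames:     (T, D)     per-frame bottleneck features
    posteriors: (T, C)     per-frame responsibilities of the C mixture components
    means:      (C, D)     component means
    inv_covs:   (C, D)     inverses of the (diagonal) component covariances
    loadings:   (C, D, R)  low-rank loading matrix T_c for each component
    Returns the MAP estimate of the R-dimensional latent factor w, which plays
    the role of the utterance embedding.
    """
    C, D, R = loadings.shape
    # Zeroth-order and centred first-order statistics per component.
    n = posteriors.sum(axis=0)                          # (C,)
    f = posteriors.T @ frames - n[:, None] * means      # (C, D)
    # Posterior precision and linear term for the latent factor w.
    precision = np.eye(R)
    linear = np.zeros(R)
    for c in range(C):
        tc_scaled = loadings[c] * inv_covs[c][:, None]  # Sigma_c^{-1} T_c, (D, R)
        precision += n[c] * loadings[c].T @ tc_scaled
        linear += tc_scaled.T @ f[c]
    return np.linalg.solve(precision, linear)
```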

Those efficiency gains allow them to explore doing something like SDC with the bottleneck features, that is, concatenating frames to extend the context in time, which appears to work quite well.
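
As a rough illustration of that context extension (a sketch with made-up function and parameter names; the paper's exact SDC-like configuration may differ), concatenating neighbouring bottleneck frames onto each frame looks like this:

```python
import numpy as np

def stack_context(feats, left=3, right=3):
    """Concatenate neighbouring frames onto each frame of a bottleneck-feature
    matrix to widen its temporal context (an SDC-like context extension).

    feats: (num_frames, feat_dim) array; returns an array of shape
    (num_frames, feat_dim * (left + 1 + right)). Edges are handled by
    repeating the first/last frame.
    """
    num_frames = feats.shape[0]
    padded = np.pad(feats, ((left, right), (0, 0)), mode="edge")
    shifted = [padded[i:i + num_frames] for i in range(left + 1 + right)]
    return np.concatenate(shifted, axis=1)

# e.g. 64-dim bottleneck features over 200 frames -> 448-dim context features
context_feats = stack_context(np.random.randn(200, 64), left=3, right=3)
```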

The tests are done on LRE09 with the six most highly confused languages,

and he gets some improvements. As you'll see if you come to the poster, the improvement is smaller at three seconds and larger for the longer utterances, which is not really surprising. But if you're interested, we're poster number eleven.