So we are presenting here our work using what we call phone-gram units and recurrent neural networks for language identification.

So what we are doing with the recurrent neural network is to use phonemes as input for language identification, where the vocabulary is given by the set of phonemes.

Then we have also incorporated context information, using a sliding window over the phoneme sequence to form diphones and triphones, comparing them and the fusion of all of them.

So we are proposing the concatenation of these adjacent phonemes in our system.
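To give a rough idea of what concatenating adjacent phonemes into phone-gram units can look like, here is a minimal sketch; the phoneme symbols and the joining convention are invented for illustration and are not the ones used in our system.

```python
# Minimal sketch: build phone-gram units (diphones, triphones) by
# concatenating adjacent phonemes with a sliding window.
# The phoneme sequence below is invented for illustration only.

def phone_grams(phonemes, n):
    """Concatenate n adjacent phonemes into a single phone-gram token."""
    return ["_".join(phonemes[i:i + n]) for i in range(len(phonemes) - n + 1)]

phoneme_seq = ["a", "b", "e", "r", "t", "o"]   # hypothetical recognizer output
unigrams = phone_grams(phoneme_seq, 1)          # single phonemes
diphones = phone_grams(phoneme_seq, 2)          # e.g. "a_b", "b_e", ...
triphones = phone_grams(phoneme_seq, 3)         # e.g. "a_b_e", "b_e_r", ...

print(diphones)
print(triphones)
```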

This architecture, applied to the language identification system, is based on a phonotactic approach to language detection.

For each of the phonetic recognizers, the Brno recognizers, we obtain a sequence of phonemes.

In evaluation, for each utterance we compute an entropy-like metric provided by the network, and these entropy scores are calibrated and then used later.
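As a minimal sketch of this kind of entropy-style scoring, assuming we already have the per-token probabilities that each language-specific model assigns to the utterance (the probability values below are invented), the idea is that the language whose model gives the lowest average negative log-probability is the best match; in the actual system these scores are then calibrated before being used.

```python
import math

# Sketch: per-utterance entropy-like score from a language-specific
# language model. token_probs would be the probabilities assigned to
# each unit of the utterance; the values here are invented.

def utterance_entropy(token_probs):
    """Average negative log-probability (cross-entropy) over the utterance."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

# Hypothetical scores from two language models for the same utterance.
scores = {
    "lang_A": utterance_entropy([0.20, 0.15, 0.30, 0.25]),
    "lang_B": utterance_entropy([0.05, 0.02, 0.10, 0.08]),
}

# Lower entropy means the utterance fits that language model better.
print(min(scores, key=scores.get))
```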

We also present work done with HuBERT representations, used in order to reduce the vocabulary of this neural network, using k-means to group similar phone-grams.

We have worked with this HuBERT model at the phoneme level and we obtained a relative improvement of seven percent.

So it is possible to reduce the vocabulary with this approach.
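A minimal sketch of this vocabulary-reduction step, assuming we already have phoneme-level HuBERT embeddings available as an array; the embedding dimension, number of units and number of clusters are illustrative, not the values from our experiments.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch: reduce the vocabulary by clustering phoneme-level HuBERT
# embeddings with k-means, so that acoustically similar units end up
# sharing one cluster id. Sizes below are illustrative only.

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 768))   # stand-in for HuBERT phoneme embeddings

kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(embeddings)

# Each original unit is replaced by its cluster id, shrinking the
# vocabulary from 500 distinct units to at most 50 cluster-based units.
reduced_vocab_ids = kmeans.labels_
print(len(set(reduced_vocab_ids)))
```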

We also present in this work a study of the role of the recurrent neural network parameters.

Here is the list of parameters we have been working with.

Here in the results we can see the error rates on the database we used, comparing the phonemes, the diphones and the triphones; then we can see the fusion of them, the comparison with the baseline PPRLM, and a fusion with that and the standard acoustic system based on MFCCs.

And we see the different fusions; finally, we can see that this approach also provides complementary information, so there are final improvements in our global system.
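As a minimal sketch of score-level fusion of the calibrated subsystem scores (phone-gram system, PPRLM, MFCC acoustic system): the scores and weights below are invented for illustration, and in practice the fusion weights would be trained on a development set (for example with logistic regression).

```python
# Sketch: simple weighted score-level fusion of calibrated subsystem
# scores for one utterance and one target language. All values are
# invented for illustration.

subsystem_scores = {
    "phonegram_rnn": 1.2,   # calibrated score from the phone-gram RNN system
    "pprlm":         0.8,   # calibrated score from the PPRLM baseline
    "mfcc_acoustic": 0.5,   # calibrated score from the MFCC acoustic system
}

fusion_weights = {
    "phonegram_rnn": 0.4,
    "pprlm":         0.35,
    "mfcc_acoustic": 0.25,
}

fused = sum(fusion_weights[k] * subsystem_scores[k] for k in subsystem_scores)
print(f"fused score: {fused:.3f}")
```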

That's it.