0:00:06 So I think today you will be happy to see two different points of view on this topic: the mathematical point of view, and an engineering solution which is maybe faster and simplifies life for us.
0:00:25 The topic today is about how we can make cosine distance scoring work without score normalisation, because, you know, Patrick showed this morning that PLDA doesn't need any score normalisation. So we try to understand what score normalisation is doing in the cosine scoring, and how we can move it from the score space to the total variability space, the i-vector space.
0:00:46 The presentation is organised as follows. We give an introduction to the contribution of this paper; then I will define what the total variability space is, what cosine distance scoring is, and how channel compensation works in this space. After that I will show what score normalisation, like z-norm and t-norm, is doing in the score space of the cosine distance scoring, and how we developed a new scoring that doesn't need any score normalisation: it is still there, but we just move it to the i-vector space. Then I give some experiments and results, and finally a conclusion.
0:01:29 So, recently, the new low-dimensional speaker representation, the i-vector, has made life a lot easier for us, because now we just work in a low-dimensional space: you can try LDA, you can try PLDA, and everything can be faster in this new low-dimensional space. We also showed that cosine scoring doesn't need any target model training.
0:01:53 You just extract the i-vectors, the total factors, for the target and the test, compute the cosine distance and compare it to a threshold. It's very easy; there is no complication there. This makes the decision faster, simpler, and less complex.
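As a minimal sketch of this decision rule (not from the talk itself; the names and the threshold are hypothetical):

```python
import numpy as np

def cosine_score(w_target, w_test):
    """Cosine distance between two i-vectors (total factors)."""
    return float(w_target @ w_test /
                 (np.linalg.norm(w_target) * np.linalg.norm(w_test)))

# Accept the trial when the score exceeds a tuned threshold:
# decision = cosine_score(w_target, w_test) > threshold
```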
0:02:12 But to get good results here we still need score normalisation: we used zt-norm, z-norm followed by t-norm, and in the new version of the system I use s-norm, so we definitely need it. So in this paper I try to understand what score normalisation is doing in the cosine distance, and how I can simulate this kind of scoring directly in the total variability space, without going to the score space. That is what I will talk about in this part. We also did some speaker adaptation using cosine distance, but I will not talk about it in this paper; I will flash some results, but Stephen, the next presenter, will talk about it. So if you have any questions about speaker adaptation, you can talk to him, not to me.
0:03:01 Okay. So, as everyone here knows, JFA tries to split the GMM supervector into two parts: the first part is the speaker space, and the second part is the channel space.
0:03:21 Two years ago, when we were at the Johns Hopkins workshop in 2008, we tried to see how efficient each latent variable is: speaker factors, common factors and channel factors. We took every component of the JFA, put it into a support vector machine, and used the cosine kernel to see the performance. What was surprising was the eigenchannels, that is, the channel factors: when we put them into the SVM we were expecting an error rate around fifty percent, because normally channel factors don't contain speaker information. We found an error rate of twenty percent. It means there is speaker information that we are losing in the channel factors.
0:04:03 So in order to restore, or maybe, to put it more carefully, to minimise the impact of this information that we are losing from the speaker factors, the idea of total factors came about. The total variability space was born at the Johns Hopkins University workshop.
0:04:22 What we did is, instead of having separate speaker and channel spaces, we have one single space, the total variability space, which models both speaker and channel variability together. So when we have a target and a test, both get projected into this total variability space, and we can just compute the cosine distance there.
0:04:48 So, about the tool we use to train the total variability space: what is different between the speaker space, the eigenvoices, and the total variability space in our case? For the eigenvoices in JFA, every recording of a given speaker is seen as the same speaker, so we put all the recordings together. For the total variability space it is the opposite: each recording of the same speaker is seen as coming from a different speaker, because we want to model both speaker and channel variability at the same time. So if you have the eigenvoice algorithm, it is the same thing: you use the same algorithm, just the training list is different.
0:05:28 Okay, so for the eigenvoice space we put the data from the same speaker in the same class, and for the total variability space each recording is treated as a different speaker, as in the sketch below.
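A small sketch of that difference, assuming a generic eigenvoice trainer that only consumes (class label, features) pairs; the function names are invented:

```python
def eigenvoice_classes(recordings):
    # Speaker space (eigenvoices): all recordings of a speaker share one class.
    return [(speaker_id, feats) for speaker_id, feats in recordings]

def total_variability_classes(recordings):
    # Total variability space: every recording becomes its own "speaker".
    return [(f"rec{i}", feats) for i, (_, feats) in enumerate(recordings)]
```

The trainer itself is untouched; only this labeling changes.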
0:05:39 There are different ways to estimate the eigenvoices, or the total variability matrix. One is relevance MAP: for each recording we estimate the GMM supervector by relevance MAP adaptation and then compute a PCA. The other is the eigenvoice training, a MAP adaptation where the GMM supervector is not observable, so we use the EM algorithm and re-estimate everything at every iteration.
0:06:05 Why are we using the eigenvoice training? Because at the Johns Hopkins University workshop, some people from BUT tried, if I'm not wrong, different kinds of training for the speaker factors, like relevance MAP and eigenvoice MAP, and found that the best is the eigenvoice training. Maybe I'm wrong; you can confirm afterwards. Also, eigenvoice training is known to be more powerful for short durations, so maybe that explains why the total variability space in this case gives better results than relevance MAP.
0:06:44 So when we have a target recording and a test recording, we estimate the total variability factors, the i-vectors, and we just compute the cosine distance scoring between the two vectors, and then compare it to a threshold. For the channel compensation, I first do LDA for dimensionality reduction, to maximise the between-speaker variability and minimise the within-class, sorry, the within-speaker variability, and then apply WCCN to do some kind of normalisation in the LDA-projected, reduced space.
0:07:26 For the LDA, the projection matrix is defined by solving this generalised eigenvalue problem involving the between-speaker variability and within-speaker variability matrices, as in the sketch below.
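A sketch of that estimation with numpy/scipy, assuming the background i-vectors come with speaker labels (a simplified illustration, not the exact recipe of the paper):

```python
import numpy as np
from scipy.linalg import eigh

def train_lda(ivectors, labels, out_dim=200):
    """Solve S_b v = lambda S_w v and keep the top out_dim eigenvectors."""
    labels = np.asarray(labels)
    mu = ivectors.mean(axis=0)
    dim = ivectors.shape[1]
    S_b = np.zeros((dim, dim))   # between-speaker covariance
    S_w = np.zeros((dim, dim))   # within-speaker covariance
    for spk in np.unique(labels):
        X = ivectors[labels == spk]
        mu_s = X.mean(axis=0)
        S_b += len(X) * np.outer(mu_s - mu, mu_s - mu)
        S_w += (X - mu_s).T @ (X - mu_s)
    vals, vecs = eigh(S_b, S_w)            # generalised eigenvalue problem
    order = np.argsort(vals)[::-1][:out_dim]
    return vecs[:, order]                  # projection matrix A (e.g. 400 x 200)
```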
0:07:41 There is one remark that I need to point out here. In the first version of the cosine distance work, I said that the mean of all the speakers is equal to zero, because of the normal prior distribution of the total factors. But in this work I estimated it: I found I need to compute it, because I ran into some problems with the new scoring when I didn't estimate it.
0:08:12 For the WCCN, what we do is this: after estimating the LDA, we project all our background data into the lower-dimensional space, moving from four hundred to two hundred dimensions, and then we use the same background data (it doesn't have to be the same, but here it is) to estimate the WCCN in the two-hundred-dimensional space. So the WCCN is applied in the projected space, not in the original space.
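A sketch of the WCCN estimation in that projected space (assuming background i-vectors already projected by the LDA, with one label per speaker; this follows the talk, not necessarily the exact code of the paper):

```python
import numpy as np

def train_wccn(ivectors_lda, labels):
    """Within-class covariance in the 200-dim LDA space, then the
    Cholesky factor of its inverse; vectors are later mapped as L.T @ w."""
    labels = np.asarray(labels)
    dim = ivectors_lda.shape[1]
    speakers = np.unique(labels)
    W = np.zeros((dim, dim))
    for spk in speakers:
        X = ivectors_lda[labels == spk]
        Xc = X - X.mean(axis=0)
        W += Xc.T @ Xc / len(X)
    W /= len(speakers)
    return np.linalg.cholesky(np.linalg.inv(W))
```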
0:08:45 So here is some kind of visualisation of all these steps. Each colour is one speaker and each point is one recording of that speaker; these are five female speakers. This is after the LDA projection into two dimensions. When you apply the WCCN you have almost the same scatter, but, as you can see here, we are minimising the intra-speaker variability. And when you do the length normalisation of the cosine scoring, you are going onto the spherical area here: speaker one here, speaker two there, and so on. This relates to what was explained this morning about the distribution: all the data lie on the same sphere.
0:09:44 This is the diagram of the total variability system. First we use a lot of non-target speakers, with several recordings per speaker. We do MFCC extraction and use the data to train the UBM, and then extract the Baum-Welch statistics for all these recordings. After that we train the total variability matrix, then extract the i-vectors for all these recordings, and then estimate the LDA and the WCCN, with the WCCN estimated on the LDA-projected vectors. When I have a target or a test recording, I just extract the MFCCs, use the UBM to extract the Baum-Welch statistics, and from the total variability matrix extract the total factors, and then apply the LDA and the WCCN to normalise the new vectors. Okay. So given the total variability matrix, we extract the total factors for the target and the test, project them with LDA and WCCN, compute the cosine distance and make the final decision.
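The diagram, restated as commented pseudocode; every function name here is invented for illustration:

```python
# --- offline, on the background (non-target) set ---
# feats  = [extract_mfcc(rec) for rec in background]
# ubm    = train_ubm(feats, n_gauss=2048)
# stats  = [baum_welch_stats(f, ubm) for f in feats]
# T      = train_total_variability(stats)             # total variability matrix
# ivecs  = [extract_ivector(s, T, ubm) for s in stats]
# A      = train_lda(ivecs, speaker_labels)           # 400 -> 200
# L      = train_wccn([A.T @ w for w in ivecs], speaker_labels)
#
# --- per trial ---
# w_tgt  = extract_ivector(baum_welch_stats(extract_mfcc(target), ubm), T, ubm)
# w_tst  = extract_ivector(baum_welch_stats(extract_mfcc(test), ubm), T, ubm)
# score  = cosine_score(L.T @ (A.T @ w_tgt), L.T @ (A.T @ w_tst))
```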
0:11:00 So now I will explain what score normalisation is doing in this space, in the cosine distance scoring. Let me simplify some equations first. Let us define what we call the normalised total factors: the projection by the LDA and by the Cholesky decomposition of the inverse within-class covariance matrix, normalised by the length. In this case the cosine distance scoring becomes just a dot product. I just want to simplify: we have a dot product, okay.
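In code, the normalised total factors and the resulting dot-product scoring look like this (a sketch reusing the A and L from the sketches above):

```python
import numpy as np

def normalize_ivector(w, A, L):
    """LDA projection, then the Cholesky factor of the inverse
    within-class covariance, then length normalisation."""
    v = L.T @ (A.T @ w)
    return v / np.linalg.norm(v)

# After this mapping the cosine distance is a plain dot product:
# score = normalize_ivector(w1, A, L) @ normalize_ivector(w2, A, L)
```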
0:11:41 You can also see all of this as feature extraction: in the first paper we said that the total variability matrix is a feature extractor, and here the projections do the channel compensation, so the scoring becomes just a dot product.
0:11:59 Now, if you want to see what z-norm is doing: we have a target speaker and a set of impostor utterances. For all these utterances we extract the normalised total factors, and we need to compute the mean of the scores and the standard deviation of the scores. So I tried to see what this mean and this standard deviation are really doing. For every impostor utterance I compute the score, just the dot product between the target and the impostor, sum over them and divide by n: this is the mean.
0:12:43 For the target speaker, if you simplify that, it is just the dot product between the target normalised i-vector and the mean of the impostors' normalised i-vectors. So this is the mean: the dot product between the normalised target i-vector and the impostors' normalised i-vector mean, where n is the number of impostor utterances in the z-norm list.
0:13:11 If you follow the same process for the standard deviation, you have the scores between the target and the impostor utterances, minus the mean, which is exactly the one above, the dot product between the normalised target i-vector and the impostor mean. And if you factor out the target, you can see that what appears is the covariance matrix of the impostor i-vectors.
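Written out (a reconstruction of what the talk derives, with w-hat the normalised total factors and n impostor utterances):

```latex
\begin{aligned}
\mu_z &= \frac{1}{n}\sum_{i=1}^{n}\hat{w}_{\mathrm{tgt}}^{t}\,\hat{w}_{\mathrm{imp}_i}
       = \hat{w}_{\mathrm{tgt}}^{t}\,\bar{w},
&\bar{w} &= \frac{1}{n}\sum_{i=1}^{n}\hat{w}_{\mathrm{imp}_i},\\
\sigma_z^{2} &= \frac{1}{n}\sum_{i=1}^{n}
  \bigl(\hat{w}_{\mathrm{tgt}}^{t}\hat{w}_{\mathrm{imp}_i}-\mu_z\bigr)^{2}
       = \hat{w}_{\mathrm{tgt}}^{t}\,C\,\hat{w}_{\mathrm{tgt}},
&C &= \frac{1}{n}\sum_{i=1}^{n}
  (\hat{w}_{\mathrm{imp}_i}-\bar{w})(\hat{w}_{\mathrm{imp}_i}-\bar{w})^{t}.
\end{aligned}
```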
0:13:50 So score normalisation, z-norm, if you put it into the equation of the scoring, is just shifting the test normalised i-vector by the mean of the impostors, and then applying a kind of length normalisation, but a length normalisation of the target side that is based only on the impostor covariance. This means that for z-norm the normalising direction is based on the variability between the impostors.
0:14:31 In a similar way, you can find what t-norm is doing: where z-norm shifts the test, t-norm is shifting the target by the impostor mean, and doing the length normalisation on the test side, again with that kind of covariance between the impostors.
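Both normalisations then reduce to vector operations; a sketch, assuming `imp` is an (n, d) matrix of normalised impostor i-vectors:

```python
import numpy as np

def znorm_score(w_tgt, w_tst, imp):
    """z-norm done directly in the i-vector space:
    (w_tgt . w_tst - mu_z) / sigma_z, rewritten with the identities above."""
    m = imp.mean(axis=0)
    C = np.cov(imp, rowvar=False, bias=True)
    return w_tgt @ (w_tst - m) / np.sqrt(w_tgt @ C @ w_tgt)

def tnorm_score(w_tgt, w_tst, imp):
    """t-norm is the mirror image: shift the target, scale on the test side."""
    m = imp.mean(axis=0)
    C = np.cov(imp, rowvar=False, bias=True)
    return (w_tgt - m) @ w_tst / np.sqrt(w_tst @ C @ w_tst)
```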
0:14:56 So we built a new scoring, from an idea which is not exactly z-norm or t-norm: it is symmetric. We take some background of impostors and compute their mean, and we shift the target by that mean and normalise the target total factors; the test is also shifted by the impostor mean; and we normalise the lengths of the test and of the target based on the covariance between the impostors.
0:15:36 Another one is s-norm; I think it was introduced in Patrick's paper, if I am not wrong. In this case, for us, all of this is exactly s-norm, because what s-norm is doing can be seen as doing z-norm and t-norm symmetrically at the same time: for the target it shifts the test and normalises by the target-side term, and it shifts the target and normalises by the test-side term, always the same operation. So this is exactly s-norm, and we can do s-norm without any extra parameter estimation, just by normalising in the total variability space.
0:16:12 This kind of scoring helps to speed up the process even more: we only compute the cosine distance directly, instead of computing it many times as in zt-norm, or as in the complete s-norm of that paper.
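A sketch of this symmetric scoring as described verbally (the exact normaliser in the published paper may differ; the diagonal option mirrors the speed-up mentioned later in the talk):

```python
import numpy as np

def symmetric_score(w_tgt, w_tst, imp, diagonal=True):
    """Shift both normalised i-vectors by the impostor mean and scale each
    side with the impostor covariance; no per-trial score normalisation."""
    m = imp.mean(axis=0)
    C = np.cov(imp, rowvar=False, bias=True)
    if diagonal:
        C = np.diag(np.diag(C))   # diagonal covariance, for speed
    num = (w_tgt - m) @ (w_tst - m)
    return num / np.sqrt((w_tgt @ C @ w_tgt) * (w_tst @ C @ w_tst))
```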
0:16:30 Then we did some experiments. We used a UBM with two thousand and forty-eight Gaussians and feature dimension sixty. It is an old system that I had; I didn't update it for this, sorry. We have four hundred total factors, the LDA reduces them to two hundred, and the WCCN is applied in the two-hundred-dimensional space. We use around one thousand utterances for z-norm and around two hundred for t-norm. For s-norm we combined all the impostors together, and for the mean and the covariance of the new scoring we also use all the impostors together, but with a diagonal covariance matrix for the impostors, just to speed up the process and the experiments; we could use the full one.
0:17:22 A lot of people ask me how to build the total variability space, so I tried to build this table to show how you can train your LDA and the other components, and with which databases. For the UBM we use Switchboard, cellular and landline, and NIST 2004 and 2005. For the total variability matrix we use all the data, because the more data this space gets, the better it is, and we keep speakers that have at least two recordings to train the total variability matrix. This is the first time that we succeeded in using Fisher data in the factor analysis training; Patrick tried it in the past with the JFA and didn't have success with it. For the LDA I use Switchboard and NIST 2004 and 2005, because I am trying to model the between-speaker variability, so we need more speakers. For the WCCN it was surprising: I found that the best result comes from using only NIST 2004 and 2005, maybe because in the NIST data we have the same speakers speaking over different phone numbers and telephones, compared to Switchboard. Maybe this is why we need only NIST 2004 and 2005.
0:18:33 Okay, so these are the results. I show the core condition of NIST 2008, results on the female part only. I just want to compare how score normalisation is working here; I forgot to put the scores without score normalisation, sorry. This first row is the original scoring with zt-norm, as we did in the past. When you do the new scoring, the one that doesn't need z-norm or t-norm, on English trials we obtain almost the same performance; there is not a very big improvement. However, for all trials there is some kind of gap, because these are English trials versus all trials, where we have different languages, and here the DCF and the equal error rate were very good with the new scoring. Also, s-norm gives quite competitive results, and it gets better results on all trials compared to the original zt-norm scoring. So it seems we can do score normalisation in the i-vector space; there is no problem with that.
0:19:50 And these are the results again for the 10sec-10sec condition. We find that the new scoring helps a lot here: it improves the performance, not only for the DCF but also for the equal error rate, and also for the DCF on all trials; and s-norm was also doing very well here in the 10sec condition, compared to the core condition.
0:20:16 As a conclusion: in this paper I tried to simplify life again, by moving the score normalisation into the total variability space, which makes the process simpler and faster if you want to optimise the cosine distance scoring. And we did it for the purpose of doing some speaker adaptation, so that we don't have to update the parameters of the score normalisation, the z-norm and t-norm means and variances, during the adaptation. Stephen will talk more about that afterwards. Thank you.
0:20:58 [Questions from the audience; the first question is largely inaudible.]
0:22:24 [Audience member] The point is that, with this normalisation, you are selecting, emphasising, directions in the space based on the differences between the impostors. I am wondering if you could modify the normalisation approach such that they are more loosely coupled, as a function of the score.
0:22:51 That can be a good point here, because of this length normalisation. Okay, if I try to understand it: when I did the cosine distance, by doing the LDA and the WCCN I am removing the within-class variability. But then, when I do that length normalisation, it takes in the information of maximising the variability between speakers; it can be seen as a between-speaker variability term in a Mahalanobis-like metric. So it seems like I am losing some information between the speakers, beyond the LDA and WCCN. That's true: when I see this kind of thing, it seems like I am doing something that hurts me.
0:23:40 Yeah, right, but this is a good point, whether to do it or not. In the end it is a projection that I apply, but maybe I need to keep the interaction between the speakers, and I don't know how to do that yet. This is an excellent point, yes. Okay.
0:24:12 [Audience member] I have a comment regarding the WCCN question: I tried to do the length normalisation before the WCCN. Actually, before the division, I just do the length normalisation first and then the WCCN. I tried it, but it didn't help. [Najim] I tried it too, and I think more than one person also tried it; you tried it as well, and found it is not helping either. Yeah.
0:24:58 [Audience question, largely inaudible: about the impostor mean and variance used in the new scoring, and whether it is really equivalent to z-norm and t-norm.]
0:26:32 [Audience member] So when you do the z-norm or t-norm, which is the process you explained, I am just wondering whether the system stays calibrated in the same way. [Najim] Okay, that is a good question. I have tried to understand what the t-norm is doing in the middle, but I never succeeded. I never succeeded to do that, but I tried to see whether my system, if you compare the results, is almost the same as with zt-norm; only the equal error rate changes a little bit. But anyway, I haven't checked whether it is well calibrated or not. If you have any comment about how we can prove that part, I will be happy to hear it. Because I did the s-norm since I needed it in the new version of the system, but for the calibration question, I wanted to start on it and I don't know how to do it yet.
0:27:41 And here, one comment: if you are doing mixed conditions, for example training on telephone data and testing on microphone data, you can do this shift differently based on which database you are using, which can help you in the cross-channel conditions; see the sketch below. Right.
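As a small illustration of this closing comment (hypothetical; the talk only suggests the idea), the shift can be selected per channel condition:

```python
import numpy as np

def condition_dependent_shift(w, condition, impostor_sets):
    """Shift with the impostor mean matching the trial's channel,
    e.g. condition in {"tel", "mic"} with one impostor matrix each."""
    m = impostor_sets[condition].mean(axis=0)
    return w - m
```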
0:28:06 Thank you very much, Najim.