First investigations on self trained speaker diarization
Gaël Le Lan, Sylvain Meignier, Delphine Charlet, Anthony Larcher
This paper investigates self-trained cross-show speaker diarization applied to collections of French TV archives, based on an i-vector/PLDA framework. The parameters used for i-vector extraction and PLDA scoring are trained in an unsupervised way, using the data of the collection itself. Performances are compared using combinations of target data and external data for training. The experimental results on two distinct target corpora show that using data from the corpora themselves to perform unsupervised iterative training and domain adaptation of PLDA parameters can improve an existing system trained on external annotated data. Such results indicate that performing speaker indexation on small collections of unlabeled audio archives should rely on the availability of a sufficient external corpus, which can be specifically adapted to every target collection. We show that a minimum collection size is required to exclude the use of such an external bootstrap.
Recorded: 2016-06-22 08:55 - 09:20
Speech Transcript
0:00:15 Hi everyone, my name is Gaël and I'm working with Orange Labs and LIUM in France, and I'm going to talk about the concept of self-trained speaker diarization. The application we are working on is the task of cross-recording speaker diarization applied to TV archives, French TV 0:00:41 archives. The goal is to index the speakers of collections of multiple recordings, in order, for example, to provide new means of dataset exploration by creating links between different episodes.
Our system is based on a two-pass approach: we first 0:01:04 process each recording separately, applying some kind of speaker segmentation and clustering, and then we perform cross-recording speaker linking, trying to link all within-recording clusters across the whole collection. The framework is based on the state-of-the-art speaker recognition 0:01:28 framework: we use i-vector/PLDA modeling, and for clustering we use hierarchical agglomerative clustering. We know that the goal of PLDA is to maximize the between-speaker variability while 0:01:46 minimizing the within-speaker variability.
What we want to investigate in our paper is: can we use the target data as training material, and how well can we estimate the speaker variability?
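As a reminder (my notation, not shown in the talk), the simplified two-covariance view of PLDA behind these statements can be written as follows, where the speaker factor carries the between-speaker variability and the residual carries the within-speaker variability:

```latex
% i-vector w from speaker s under a two-covariance PLDA model
w = \mu + y_s + \epsilon, \qquad
y_s \sim \mathcal{N}(0, \Sigma_b), \qquad
\epsilon \sim \mathcal{N}(0, \Sigma_w)
```

Training estimates the between-speaker covariance Σ_b and within-speaker covariance Σ_w, which is why it requires several sessions per speaker.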
0:02:07 First, I'm going to present the diarization framework. Let's take an audio file from the target data; our target data is unlabeled, so we just have audio files. First we extract some features: we use MFCC features with delta and 0:02:27 delta-delta. Then we perform a combination of speech activity detection and BIC clustering to extract speaker segments. On top of those segments we can extract i-vectors, using a pre-trained UBM and total variability 0:02:49 matrix. Once we have obtained the i-vectors, we are able to score all i-vectors against each other and compute a similarity matrix; for that we use the PLDA likelihood ratio, where the PLDA parameters have been estimated separately. Once we have the similarity matrix we can apply speaker clustering, 0:03:15 and the result of the clustering is a set of speaker clusters.
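A minimal sketch of this scoring-and-clustering step, assuming the i-vectors of one recording are already available as rows of a NumPy array; cosine similarity is used here as a simple stand-in for the PLDA log-likelihood ratio described in the talk:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_recording(ivectors, threshold=0.0):
    """Cluster the i-vectors of one recording into speaker clusters.

    `ivectors` is an (n_segments, dim) array. Cosine similarity stands in
    for the PLDA log-likelihood ratio used in the talk.
    """
    x = ivectors / np.linalg.norm(ivectors, axis=1, keepdims=True)
    sim = x @ x.T                      # similarity matrix: higher = more likely same speaker
    dist = 1.0 - sim                   # convert to a distance for scipy
    np.fill_diagonal(dist, 0.0)
    z = linkage(squareform(dist, checks=False), method="complete")
    # stop merging when similarity drops below `threshold`
    labels = fcluster(z, t=1.0 - threshold, criterion="distance")
    return labels
```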
We can repeat this process for each of the recordings. Once we've done that, we can compute a collection-wide similarity matrix and repeat the clustering process; this time I call it speaker linking, because the goal is to 0:03:40 link the within-recording clusters across the whole collection. After the linking part we obtain the cross-show diarization.
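A sketch of this second pass, reusing `cluster_recording` from the previous snippet (my own simplification: each within-show cluster is represented by its mean i-vector before collection-wide clustering):

```python
import numpy as np

def link_across_shows(per_show_ivectors, per_show_labels, threshold=0.0):
    """Link within-show clusters across the collection.

    `per_show_ivectors` / `per_show_labels` are lists with one entry per show.
    Returns a mapping from (show id, local cluster id) to a global speaker id.
    """
    reps, owners = [], []                        # cluster representatives and their origin
    for show_id, (ivecs, labels) in enumerate(zip(per_show_ivectors, per_show_labels)):
        labels = np.asarray(labels)
        for lab in np.unique(labels):
            reps.append(ivecs[labels == lab].mean(axis=0))
            owners.append((show_id, int(lab)))
    global_labels = cluster_recording(np.vstack(reps), threshold)  # same clustering as pass one
    return {owner: int(glob) for owner, glob in zip(owners, global_labels)}
```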
The usual way of training the UBM and TV matrix and estimating the PLDA parameters is to use a 0:04:03 training dataset, which is labeled; the training procedure is then pretty straightforward. The problem when we apply this technique is that there is some mismatch between target and training data: first, we don't have the same acoustic conditions, 0:04:25 and second, we don't necessarily have the same speakers in the target and training data. So if we could use information about the target data, maybe we could obtain better results.
0:04:43 What we want to investigate is the concept of self-trained diarization, meaning we would like to use only the target data itself to estimate the parameters, and then we will compare the results with a combination of target and training data. The goal of self-trained diarization is to avoid the acoustic mismatch between the training 0:05:07 and target data. What do we need to train an i-vector/PLDA system? To train the UBM and the TV matrix we only need clean speech segments, and the training is then straightforward. As for the PLDA parameter estimation, we need several sessions per speaker, recorded 0:05:27 in various acoustic conditions. So what we need to investigate is: do we have several speakers appearing in different episodes of our target data? And, assuming we know how to effectively cluster the target data in terms of speakers, can we estimate PLDA parameters with those clusters?
0:05:48 Let's have a look at the data. We have around two hundred hours of French broadcast news drawn from previous French evaluation campaigns, so it's a combination of TV and radio data. Of these two hundred hours we selected two shows as target 0:06:08 corpora: LCP Info and BFM Story. We took all other available recordings to build what we call the training corpus. If we look at the data, we see that we have more than forty episodes 0:06:33 for each target show, and what we can note is the speech proportion of what I call the recurring speakers, which is above fifty percent for both corpora. A recurring speaker is a speaker who appears in more than one episode, 0:06:51 as opposed to a one-time speaker, who appears in only one episode.
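These statistics could be computed with a short script like the one below (the label layout `episode -> {speaker: speech duration}` is a hypothetical format chosen for illustration, not the corpus's actual annotation format):

```python
from collections import defaultdict

def recurring_speaker_stats(episodes):
    """Return (speech proportion of recurring speakers,
               average number of episodes per recurring speaker).

    `episodes` maps episode id -> {speaker id: speech duration in seconds}.
    """
    appearances = defaultdict(int)      # speaker -> number of episodes
    durations = defaultdict(float)      # speaker -> total speech time
    for spk_durs in episodes.values():
        for spk, dur in spk_durs.items():
            appearances[spk] += 1
            durations[spk] += dur
    recurring = {s for s, n in appearances.items() if n > 1}
    total_time = sum(durations.values())
    recurring_time = sum(durations[s] for s in recurring)
    avg_sessions = sum(appearances[s] for s in recurring) / max(len(recurring), 1)
    return recurring_time / total_time, avg_sessions
```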
So, to answer the first question: yes, we have several speakers appearing in different episodes of the target data. Next, 0:07:09 we decided to train an oracle system, meaning we suppose we know how to cluster the target data: we use the target data labels. In real life we would not 0:07:26 have those labels, but for these experiments we decided to use them. So, to train the UBM and the TV matrix and estimate the PLDA 0:07:37 parameters, we proceed exactly as with the training data; we just replace the labeled training data with the labeled target data.
What we see is that for the LCP show we are able to obtain a result; 0:07:52 the results are presented in terms of cross-recording diarization error rate (DER). For the LCP show we obtained results, whereas for the BFM show we were not able to estimate the PLDA parameters, and we suppose we don't have enough data to do so; we're going to 0:08:14 investigate that. If we compare with the baseline results, we see that by using the information about speakers in the target data we should be able to improve the baseline system.
What we want 0:08:35 to investigate is the minimum amount of data we need to estimate PLDA parameters, because we saw that for the BFM show we were not able to train PLDA, while for the LCP show we were. So 0:08:51 we decided to find out the minimum number of episodes we could take from the LCP show to estimate suitable PLDA parameters. The plot you see here is the DER on the LCP show as a function of the number of episodes taken to estimate the 0:09:15 PLDA parameters. The total number of episodes is forty-five, and we started the experiments with thirty episodes, because below that the estimation fails. What's interesting to see is that we need around thirty-seven episodes to be able 0:09:33 to improve on the baseline results, and when we have thirty-seven episodes we have forty recurring speakers. What's also interesting to see is that we have the same number of speakers here, 0:09:52 with a different number of episodes, but the resulting DER differs considerably. So with the same speaker count, 0:10:10 what's happening is simply that more and more data are gathered for each speaker, and we need a minimum amount of data per speaker: if we look at the average number of sessions per speaker, it's around seven when we have thirty-seven episodes.
0:10:31 As for the BFM show, when we take all episodes we have only thirty-five recurring speakers, appearing in five episodes on average, which is far less than for the LCP corpus, and that's why we are not able to train the PLDA parameters.
0:10:50 Now let's place ourselves in the real case: we are no longer allowed to use the target data labels. First, to train the UBM and TV matrix we need clean speech, so we simply take the output of the speaker segmentation and compute the UBM and TV matrix from it. 0:11:14 But we don't have any information about the speakers, so we are not able to estimate the PLDA parameters. We therefore replace the PLDA likelihood scoring by cosine-based scoring, and then we have a working system. When we look at the results, they are worse than when using PLDA, 0:11:39 which is not a surprise; it is what we expected. Now that we have obtained speaker clusters, the idea is to use those speaker clusters to try to estimate the PLDA parameters. When we do so, the training procedure doesn't succeed: 0:12:04 we saw in the oracle experiment that the amount of data was limiting, and we also suspect that the purity of the clusters we use is too poor to allow us to estimate the PLDA parameters.
To summarize the self-training experiment: for the UBM and TV training we selected segments produced by the speaker segmentation, and we 0:12:31 only keep segments with a duration above ten seconds. We also chose the BIC parameters so that the segments are considered pure, because to train the TV matrix we need clean segments: we want only one speaker in each segment used for training. As for the PLDA, we need several sessions 0:12:57 per speaker from various episodes, so first we perform an i-vector-clustering-based diarization and use the output speaker clusters to perform i-vector normalization and estimate the PLDA parameters; we keep only the output speaker clusters whose i-vectors come from 0:13:18 more than three episodes.
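The selection rules stated here (segments longer than ten seconds for UBM/TV, clusters spanning more than three episodes for PLDA) could be expressed as in the sketch below; the data layout is hypothetical, chosen only to make the rules concrete:

```python
def select_training_material(segments, clusters, min_seg_dur=10.0, min_episodes=3):
    """Select unsupervised training material following the rules of the talk.

    `segments`: list of (episode_id, start, end) from the speaker segmentation;
    keep only segments longer than `min_seg_dur` seconds for UBM/TV training.
    `clusters`: dict cluster_id -> list of (episode_id, ivector); keep only
    clusters whose i-vectors come from more than `min_episodes` episodes.
    """
    ubm_tv_segments = [(ep, s, e) for ep, s, e in segments if e - s > min_seg_dur]
    plda_clusters = {
        cid: members
        for cid, members in clusters.items()
        if len({ep for ep, _ in members}) > min_episodes
    }
    return ubm_tv_segments, plda_clusters
```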
So we saw that we are not able to train a sufficient system with only the target data, and we decided to add some training data into the mix; this is the classic idea of domain adaptation. 0:13:41 The main difference between this system and the baseline is that the UBM and TV matrix are trained on the target data instead of the training data; we then extract i-vectors from the training data and estimate the PLDA parameters on the training data. 0:14:05 When replacing the UBM and TV matrix in this way, we improve by around one percent absolute in terms of DER.
0:14:20 Now, why not try to apply the same process as in the self-training experiments and take the speaker clusters to estimate new PLDA parameters? As before, the estimation of the PLDA parameters fails; we think we simply don't have enough data to do so. 0:14:43 So we decided to combine the training data and the target data to update the PLDA parameters, the classic domain adaptation scenario, but we don't use any weighting parameter to balance the influence of training and target data: we just take the i-vectors from the training data and the i-vectors from the 0:15:07 output speaker clusters, combine them, and train new PLDA parameters. When we combine the data this way, we again improve on the baseline system, by around one percent 0:15:23 in terms of DER.
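The pooling described here, with no weighting, amounts to concatenating the two sets before PLDA training, treating each target cluster as an extra pseudo-speaker. A minimal sketch (data shapes are my assumption, not the authors' code):

```python
import numpy as np

def pool_for_plda(train_ivecs, train_speakers, target_clusters):
    """Pool labeled training i-vectors with pseudo-labeled target clusters
    before PLDA training, without any weighting.

    `train_ivecs`: (n, dim) array; `train_speakers`: length-n list of labels;
    `target_clusters`: dict cluster_id -> (m_i, dim) array of i-vectors.
    Each target cluster becomes one additional pseudo-speaker.
    """
    ivecs = [train_ivecs]
    labels = list(train_speakers)
    for cid, cluster_ivecs in target_clusters.items():
        ivecs.append(cluster_ivecs)
        labels.extend([f"target_{cid}"] * len(cluster_ivecs))
    return np.vstack(ivecs), labels
```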
0:15:43 And now that we've done that, why not try to iterate? As long as we obtain speaker clusters we can always use them and try to improve the estimation of the PLDA parameters. Well, it doesn't work: iterating doesn't improve the system. We tried up to four iterations, but it doesn't help.
0:16:00 Let's have a look at the system parameters. We use the SIDEKIT for diarization (S4D) toolkit, a package built on top of the SIDEKIT library. For the front end we use thirteen MFCCs with delta and delta-delta. We use two hundred and fifty-six components to train the UBM; 0:16:24 the covariance matrix is diagonal. The dimension of the TV matrix is two hundred, and the dimension of the PLDA eigenvoice matrix is one hundred; we don't use any eigenchannel matrix. For the speaker clustering task we use 0:16:42 a combination of connected-components clustering and hierarchical agglomerative clustering, and, as I said before, the metric is the diarization error rate, with a two-hundred-and-fifty-millisecond collar.
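For readability, the reported configuration can be gathered into a small settings dictionary (the parameter names below are illustrative, not S4D's actual API):

```python
# Reported system configuration (names are illustrative, not S4D's API).
CONFIG = {
    "features": {"type": "MFCC", "n_ceps": 13, "delta": True, "delta_delta": True},
    "ubm": {"n_components": 256, "covariance": "diagonal"},
    "total_variability": {"rank": 200},
    "plda": {"eigenvoice_rank": 100, "eigenchannel_rank": 0},
    "clustering": ["connected_components", "hierarchical_agglomerative"],
    "metric": {"name": "DER", "collar_ms": 250},
}
```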
To summarize, we compared four different systems. First, we 0:17:08 performed supervised training using only external data. Then we used the same training process but replaced the training data with the labeled target data; this is the oracle experiment. Then we focused on 0:17:24 unsupervised training using only the target data, and we saw that it is not good enough compared with the baseline system. So we decided to bring back some training data and apply a kind of unsupervised domain adaptation, combining training and target data.
0:17:46 To conclude, we can say that if we don't have enough data we absolutely need external data to bootstrap the system, but even using unlabeled target data, which is imperfectly clustered, 0:18:04 with some kind of domain adaptation we are able to improve the system. In our future work we want to focus on the adaptation framework, where we'd like to 0:18:23 introduce a weighting of the variabilities between training and target data. We would also like to work on the iterative procedure, because we think that if we are able to better estimate the PLDA parameters after one iteration, we should be able to improve the quality of the clusters, and some kind of iteration should then be possible.
0:18:46 In fact, this follow-up work has already been done: we submitted a paper to Interspeech and it will be presented there. I can already say that, using a weighting of the variabilities, the results really do get better, and the iterative procedure also works: with two or three iterations we are able 0:19:11 to slowly improve the DER. Another way to improve remains to be explored: we would like to try to bootstrap the system without any labeled data at all. For example, we could take the training data, ignore the labels, and perform 0:19:31 cosine-based clustering, because we saw that in our approach we may not have had enough data in the target corpus to apply this idea, so bootstrapping with more unlabeled data could work. Thank you. [Session chair] Thank you. Time for questions.
0:20:06 [Question] Thank you for the talk. This is more a comment than a question, but I believe some of your problems with the EM for the PLDA come from the speaker subspace dimension being high compared to the number of speakers. [Answer] I think that is the problem: with the dimensions I mentioned for the TV and PLDA matrices, when we don't have 0:20:29 enough target data it is difficult to estimate the one-hundred-dimensional PLDA parameters if you don't have that many speakers. [Question] Did you try to reduce the dimension? [Answer] No, I didn't focus on that.
[Question] Thanks for the presentation. About the clustering: did you also consider ILP clustering, and how does it compare? [Answer] Well, 0:21:16 in my experiments the results are not very different between ILP and agglomerative clustering. I just decided to use agglomerative clustering because it's simpler, and also for computation time, 0:21:37 but there is not really a big difference between the two, I think.
0:21:59 [Question] When dealing with these different internal and external data, one thing I didn't see in the work: did you use any weighting, and why didn't you weight the data explicitly? [Answer] No, we didn't weight the data; we just took the target clusters and the training clusters and put them together in the same dataset. 0:22:20 If you look at the equations, it's the same as using a weighting parameter whose value is the relative amount of target data with respect to training data, which is almost equal to zero. That's why we need to work on the weighting of the variabilities, because we are not 0:22:50 handling that properly for now.
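For reference, the weighted adaptation alluded to here is usually written as an interpolation of the within- and between-speaker covariances (my formulation, not shown in the talk); the unweighted pooling described above corresponds to a very small α:

```latex
\Sigma_{w}^{\text{adapt}} = \alpha\,\Sigma_{w}^{\text{target}} + (1-\alpha)\,\Sigma_{w}^{\text{train}},
\qquad
\Sigma_{b}^{\text{adapt}} = \alpha\,\Sigma_{b}^{\text{target}} + (1-\alpha)\,\Sigma_{b}^{\text{train}}
```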
[Question] In your clustering experiments, how do you decide how many clusters there are? [Answer] The clustering is a function of a threshold, and we select the threshold experimentally; that's 0:23:25 why we chose two target corpora, because this way we can do an exhaustive search for the threshold on one corpus and then check whether the same threshold applies to the other corpus. The clustering threshold is around zero.
0:24:01 [Session chair] We still have time for a few questions. [Question] I was curious: in this work the iteration did not seem to be helpful, but in the follow-up work you were somehow able to fix it. What do you think 0:24:20 was the main problem? [Answer] In this work the problem is that we don't introduce any weighting: we don't balance the influence of training and target data, and in the combination we have so much training data 0:24:39 that the weighting is heavily in favour of the training data. When we change the balance between training and target data and give more importance to the target data, it tends to give better results, and then you see that with the iterative procedure you can improve a bit 0:25:02 more over two or three iterations. We also did some kind of score normalization, because when you use the target data 0:25:24 to estimate the PLDA parameters, the distribution of PLDA scores also tends to shift a lot, so you need to normalize to keep the same clustering threshold; otherwise you don't cluster at the same operating point.
0:25:40 [Session chair] Okay, if there are no further questions, let's thank the speaker.
Related Recordings
Deep complementary features for speaker identification in TV broadcast data
Mateusz Budnik, Ali Khodabakhsh, Laurent Besacier, Cenk Demiroglu
Soft VAD in Factor Analysis Based Speaker Segmentation of Broadcast News
Brecht Desplanques, Kris Demuynck, Jean-Pierre Martens