0:00:13 Um, good morning everybody.
0:00:16 So I'm very pleased to present this joint work, carried out with a colleague of mine.
0:00:26 We are also from Institut Telecom, like the previous speaker, but we are located in Brittany, at Télécom Bretagne.
0:00:34 So, the topic is blind source separation, in the underdetermined case.
0:00:43 Something I want to emphasise from the onset of the presentation is the fact that this result strongly relies on a sparseness model that was introduced several years ago.
0:00:57 I will come back to this sparseness model, and I recommend it, because of its relevance in many signal processing applications beyond the present one.
0:01:09so
0:01:11um
0:01:12 So, the problem is stated as follows.
0:01:15 We consider an instantaneous mixing case, where we have a known number of unknown sources, mixed through a matrix A.
0:01:26 For the sake of shortening the presentation, I suppose here that the mixing matrix A is known, but in the paper the case where the matrix A is unknown is discussed.
0:01:38 The resulting channels are corrupted by independent, additive white Gaussian noise, and we have a set of sensors; the number of sensors is assumed to be strictly less than the number of sources.
0:01:52 The estimation of the sources on the basis of the observations is therefore an ill-posed problem, which we tackle by assuming that the sources have sparse time-frequency representations, in continuation of several papers that are given here.
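To make the setting concrete for readers of the transcript, here is a minimal sketch of the instantaneous, underdetermined, noisy mixing model just described. All sizes, seeds and values are hypothetical choices of mine, not the authors':

```python
# Instantaneous noisy mixing x(t) = A s(t) + n(t), with m sensors < n sources and A known.
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_sensors, n_samples = 4, 3, 16000        # hypothetical sizes (m < n)

S = rng.standard_normal((n_sources, n_samples))       # placeholder sources
A = rng.standard_normal((n_sensors, n_sources))       # known mixing matrix
sigma = 0.1                                           # noise standard deviation
X = A @ S + sigma * rng.standard_normal((n_sensors, n_samples))   # noisy mixtures
```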
0:02:10so
0:02:11 The sparseness model: consider a spectrogram of one of the mixtures, and you notice that many coefficients are small.
0:02:23 In presence of noise, these small signal components are drowned by the noise, and only the signal components that are big enough remain visible.
0:02:37 You can reasonably consider that the proportion of these large signal components remains less than or equal to one half.
0:02:47 Such remarks have already been made by several authors, especially in speech and audio coding.
0:02:54 We can formalise this remark by using results, or rather hypotheses, published in 2002, in a paper dedicated to binary hypothesis testing.
0:03:08 The hypotheses used in that paper read as follows for our problem.
0:03:16 We assume that the signal components are either present or absent in the transform domain; here we consider the time-frequency domain, whereas other papers consider, for instance, the wavelet domain.
0:03:26 We assume that the probability of presence of a signal component is less than or equal to one half.
0:03:35 The second hypothesis is that, when present, the signal components are relatively big, in the sense that their amplitudes remain above some minimum amplitude.
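A minimal simulation of the two hypotheses just stated (presence with probability at most one half, amplitude above a minimum when present). The distributions below are arbitrary placeholders of mine, since the model deliberately leaves them unspecified:

```python
# Toy "weak sparseness" data: Bernoulli presence (p <= 1/2) and amplitudes >= a_min when present.
import numpy as np

rng = np.random.default_rng(1)
n_coeffs, p_presence, a_min, sigma = 10000, 0.4, 1.0, 0.2      # hypothetical values

present = rng.random(n_coeffs) < p_presence                     # presence indicators
amplitudes = a_min + rng.exponential(0.5, n_coeffs)             # amplitudes above a_min
signal = present * amplitudes * np.exp(1j * rng.uniform(0, 2 * np.pi, n_coeffs))
noise = sigma * (rng.standard_normal(n_coeffs) + 1j * rng.standard_normal(n_coeffs)) / np.sqrt(2)
observed = signal + noise                                       # noisy transform-domain coefficients
```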
0:03:47 At this stage I must make two remarks.
0:03:50 First, these hypotheses can be regarded as constraints that actually bound our lack of prior knowledge on the signal distribution.
0:04:00 That is important, because in this paper, and also in other papers based on such a framework, we do not assume that the signal distribution is known.
0:04:13 Second, we say that these hypotheses form the weak sparseness model. This terminology was suggested by one of my former PhD students, and we use it now in order to make a distinction with the notion of sparsity used, for instance, in compressed sensing.
0:04:31 In compressed sensing you assume that you have a sequence of coefficients that represents your signal, but most of these coefficients are small or even zero, and only a few of these coefficients are actually non-zero or large.
0:04:41 Here we do not restrict our attention to such a small proportion of large signal components; on the contrary, we propose a framework where this proportion of signal components can be close to one half.
0:05:01 Note that we stick to these hypotheses as I put them right before.
0:05:08 So now I am going to describe the several steps of the algorithm based on this sparseness model, before presenting some experimental results and completing the talk.
0:05:22 So, here are some hypotheses concerning the blind source separation problem. The first one is that our mixing matrix A has full rank.
0:05:36 The second hypothesis is that, at any time-frequency point, the number of active sources is strictly less than the number of sensors.
0:05:45 This second hypothesis is crucial, because without it one step of our algorithm actually fails, and that would be a problem.
0:05:58 The procedure we propose here is an extension of what was proposed by Aïssa-El-Bey and co-authors in 2007.
0:06:12 We begin by computing the short-time Fourier transform of the mixtures, in order to get a sparse representation of the noisy observations.
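A minimal sketch of this first step using SciPy's STFT; the sampling rate, window length and placeholder mixtures are my own choices, not the authors':

```python
# Short-time Fourier transform of each mixture channel.
import numpy as np
from scipy.signal import stft

fs = 16000                                                     # hypothetical sampling rate
X = np.random.default_rng(2).standard_normal((3, 10 * fs))     # placeholder mixtures (m = 3 sensors)
f, t, Z = stft(X, fs=fs, nperseg=512)                          # Z has shape (n_sensors, n_freqs, n_frames)
```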
0:06:22 Then, and this is the key point, we estimate the noise standard deviation. We need this estimate in the next steps.
0:06:34 The main contribution here is that we perform this estimation via a completely new algorithm, which was published in March 2011.
0:06:51 In that paper, the algorithm was applied to another problem, namely the detection of non-cooperative communication systems in electronic warfare.
0:07:01 This algorithm relies on a theoretical result established in 2008.
0:07:07 I don't want to get you bogged down in the mathematical details concerning this theorem or the estimator itself; I just want to outline the main principles on which the algorithm is based.
0:07:20 For this I need this random variable. In this random variable we have the short-time Fourier transform of the signal received at a given sensor, together with the noise standard deviation, and the limit theorem says the following.
0:07:44 First, under the weak sparseness model presented before, this random variable tends, with respect to a very specific and quite intricate convergence criterion, to this quantity:
0:07:58 when the signal-to-noise ratio is large, that is, when the signal-to-noise ratio is good enough;
0:08:03 when the number of time-frequency pairs used to compute this random variable is large enough;
0:08:10 and when the threshold is chosen according to the minimum amplitude of our signals.
0:08:16 This is not really a constraint, because the threshold we use satisfies the required condition.
0:08:23 The second result given by this theorem is that the noise standard deviation is actually the unique positive real number that satisfies this type of convergence.
0:08:34 The estimator then derives from this result as follows.
0:08:37 This is an asymptotic result, so the estimator is based on this discrete cost.
0:08:44 We try to minimise this discrete cost, and the value that minimises it, considered as the solution of this equation, is our estimate of the noise standard deviation.
0:09:05 That is it for this short presentation of the estimator, because I am running out of time.
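The exact cost minimised by the published estimator is on the slide, not in the transcript, so the following is only a toy illustration of the general idea described here: under the weak sparseness model, the noise standard deviation is sought as the minimiser of a discrete cost measuring how consistent a candidate value is with the statistics of the coefficients whose magnitudes fall below a threshold proportional to that candidate. The function name `toy_sigma_estimate`, the threshold factor `rho`, the grid and the Rayleigh assumption on noise-only STFT magnitudes are all my own choices, not the authors' algorithm:

```python
# Toy grid-search estimator of the noise standard deviation (NOT the published algorithm).
import numpy as np
from scipy.special import erf

def toy_sigma_estimate(Z, rho=2.0, n_grid=200):
    """Return the candidate sigma minimizing a simple consistency cost on the STFT coefficients Z."""
    mags = np.abs(Z).ravel()
    lo = max(1e-12, 1e-3 * mags.max())
    candidates = np.linspace(lo, mags.max(), n_grid)
    best_sigma, best_cost = candidates[0], np.inf
    for sig in candidates:
        below = mags[mags <= rho * sig]                 # coefficients treated as noise-only
        if below.size == 0:
            continue
        # Mean of a Rayleigh(sig) variable truncated to [0, rho * sig]
        T = rho * sig
        num = sig * np.sqrt(np.pi / 2) * erf(T / (sig * np.sqrt(2))) - T * np.exp(-T**2 / (2 * sig**2))
        den = 1.0 - np.exp(-T**2 / (2 * sig**2))
        cost = abs(below.mean() - num / den)            # discrete consistency cost
        if cost < best_cost:
            best_sigma, best_cost = sig, cost
    return best_sigma

# Example use (Z from the STFT sketch above): sigma_hat = toy_sigma_estimate(Z)
```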
0:09:09 Once we have the noise standard deviation, we can discard, or reject, the time-frequency points that correspond to observations of noise alone, or of weak noisy signals.
0:09:24 We perform this rejection on the basis of a thresholding test that guarantees a specified false alarm probability.
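A minimal sketch of such a rejection step, under my own assumptions: noise-only STFT coefficients are modelled as complex Gaussian, and `sigma` denotes the noise standard deviation of their real and imaginary parts. The function name and pooling of sensors via a chi-square statistic are illustrative choices, not necessarily the authors' test:

```python
# Keep only time-frequency points whose pooled energy exceeds a threshold set by the
# estimated noise standard deviation and a chosen false alarm probability.
import numpy as np
from scipy.stats import chi2

def keep_tf_points(Z, sigma, p_fa=1e-3):
    """Z: (n_sensors, n_freqs, n_frames) STFT; returns a boolean (n_freqs, n_frames) mask."""
    n_sensors = Z.shape[0]
    # Under noise only, sum_l |Z_l|^2 / sigma^2 is approximately chi-square with 2*m degrees of freedom.
    stat = np.sum(np.abs(Z)**2, axis=0) / sigma**2
    threshold = chi2.isf(p_fa, df=2 * n_sensors)
    return stat > threshold
```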
0:09:35 Then we estimate the short-time Fourier transform of the sources at each retained time-frequency point.
0:09:39 We begin by identifying the active sources, and this is performed by means of a standard noise subspace approach.
0:09:50 Briefly: let J be a set of indices between 1 and N, and assume that the cardinality of J is strictly less than the number of sensors.
0:10:03 We take, in the mixing matrix A, the columns whose numbers are in J, and we form the matrix A_J.
0:10:15 If J is the set of indices of the sources that are actually present at a time-frequency point, then the projection of the observation onto the corresponding noise subspace should be minimal.
0:10:37 So we proceed like this, minimising this projection, to identify the active sources, and that is all.
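A minimal reconstruction of this noise-subspace idea as I understand it from the transcript; the exhaustive search over subsets, the function name and the final least-squares step are my own illustrative choices:

```python
# At one retained TF point: try every index set J with |J| < m, project the observation onto the
# orthogonal complement of span(A_J), keep the set with the smallest residual, then solve for the
# coefficients of those sources by least squares.
import numpy as np
from itertools import combinations

def identify_and_estimate(x, A):
    """x: (m,) complex observation at one TF point; A: (m, n) known mixing matrix."""
    m, n = A.shape
    best = (np.inf, None, None)
    for size in range(1, m):                             # number of active sources < m
        for J in combinations(range(n), size):
            AJ = A[:, J]
            P_noise = np.eye(m) - AJ @ np.linalg.pinv(AJ)   # projector onto the noise subspace
            residual = np.linalg.norm(P_noise @ x)
            if residual < best[0]:
                s_J = np.linalg.pinv(AJ) @ x                # least-squares source coefficients
                best = (residual, J, s_J)
    return best[1], best[2]                               # active set and its coefficients
```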
0:10:47 After identification of the active sources, we denoise the sources by a nonlinear filter in which the estimate of the noise standard deviation is used; in fact this estimate is used here, here, and here as well.
0:11:02 Then we just have to compute the inverse short-time Fourier transform to estimate the sources in the time domain.
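A minimal sketch of these last two steps. The talk does not specify the nonlinear rule, so soft thresholding driven by the estimated noise standard deviation is used below purely as a placeholder; the assembled array `S_tf` and the parameters are hypothetical:

```python
# Shrink the estimated source STFT coefficients, then return to the time domain.
import numpy as np
from scipy.signal import istft

def soft_threshold(S_tf, sigma, k=3.0):
    """Keep the phase, shrink the magnitude by k * sigma (placeholder nonlinearity)."""
    mag = np.abs(S_tf)
    shrunk = np.maximum(mag - k * sigma, 0.0) * S_tf / np.maximum(mag, 1e-12)
    return np.where(mag > 0, shrunk, 0.0)

# S_tf: (n_sources, n_freqs, n_frames) estimated source STFTs; sigma: estimated noise std.
# S_denoised = soft_threshold(S_tf, sigma)
# _, s_time = istft(S_denoised, fs=fs, nperseg=512)   # time-domain source estimates
```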
0:11:09 Okay. So here we compare: we put in red the contributions of this work with respect to Aïssa-El-Bey's work of 2007.
0:11:28 I want to emphasise that in that work the noise standard deviation was assumed to be known.
0:11:33 Here we estimate it, and this estimate is very helpful for rejecting the time-frequency points that are useless for the source separation.
0:11:44 In the paper by Aïssa-El-Bey and co-authors, the rejection was also performed by a thresholding test, but the threshold was tuned manually, through empirical features, and for every signal-to-noise ratio under consideration.
0:12:06 Here we get rid of all this: we replace all these parameters by only one parameter, the false alarm probability, and the threshold is based on the estimate of the noise standard deviation.
0:12:23 Okay.
0:12:24 So we use this almost fully automatic approach, based on only one parameter, the false alarm probability.
0:12:32 We do not expect to perform better than the method of Aïssa-El-Bey published in 2007, but we do expect to perform quite as well.
0:12:44 And in fact that is what happens: in blue you have the normalised mean square error obtained by using their algorithm, and in red you have the result obtained by using our new algorithm; the results are quite the same.
0:13:01 But I repeat, here there is only one parameter, the false alarm probability, which was fixed at ten to the minus three.
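For reference, here is the usual definition of the normalised mean square error; I am assuming this is the normalisation behind the curves, since the transcript does not spell it out:

```python
# Normalised mean square error between a true source and its estimate.
import numpy as np

def nmse(s_true, s_est):
    """||s_true - s_est||^2 / ||s_true||^2."""
    return np.sum(np.abs(s_true - s_est)**2) / np.sum(np.abs(s_true)**2)
```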
0:13:08 And now we have the black curve. It is aimed at illustrating the difference between our noise estimator and another robust estimator: here we have replaced our estimator by the MAD estimator.
0:13:31 With the MAD estimator there is a significant loss of performance, and that is not a surprising fact.
0:13:35 The MAD estimator is a robust estimator in the sense that, when there are only a few outliers in the data, the MAD estimator can estimate the noise standard deviation.
0:13:48 But here the outliers are the signals, and we have seen in the spectrogram that the signal components can be massively present; that is why the MAD estimator fails in this case.
0:14:02 To the contrary, our estimator is based on a theoretical result which is aimed at coping with situations where outliers, that is signals, are likely to be present.
0:14:15 That is the reason why our estimator outperforms the MAD estimator; a manually tuned robust estimator would suffer from the same problem, and in fact the proposed estimator performs better.
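For reference, a minimal sketch of a standard MAD-based noise estimate, which is how I understand the baseline behind the black curve (the transcript does not give the exact variant). Applying it to the real and imaginary parts of the STFT coefficients is my own choice:

```python
# MAD-based noise standard deviation estimate: for zero-mean Gaussian data,
# sigma ~ median(|x - median(x)|) / 0.6745. It breaks down when signal components
# occupy a large fraction of the time-frequency plane.
import numpy as np

def mad_sigma(Z):
    """Z: complex STFT coefficients; returns a MAD-based noise std estimate."""
    parts = np.concatenate([np.real(Z).ravel(), np.imag(Z).ravel()])
    return np.median(np.abs(parts - np.median(parts))) / 0.6745
```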
0:14:29okay
0:14:29 And so now I would like to play some sound examples; let me see whether this works.
0:14:58 Ah, yes, I see: I cannot load the files. That is the reason, okay; I completely forgot to upload the files.
0:15:06 I was very happy with the results, but it will be difficult for you to hear them now, and I am sorry about that, because I was really very happy with them.
0:15:22 So, for the people who want to listen to the separated signals, I can provide a listening session on my laptop later.
0:15:33 So I go to the concluding part.
0:15:36 I have emphasised the role of sparseness here in the presentation, through the use of our noise standard deviation estimator.
0:15:44 I have also emphasised the fact that, by using this sparseness model, we have only one parameter to fix.
0:15:51 I would also like to emphasise the fact that this algorithm does not take into account any prior knowledge on the exact nature of the sources: we have not used the fact that the signals are audio ones.
0:16:02 We just assume that these signals have a sparse time-frequency representation, so this kind of approach can be used for other types of signals as well.
0:16:16 The estimator I have used here serves us well, but it has a very important drawback: it has a very high computational cost.
0:16:34 We can cope with that with a new algorithm for estimating the noise standard deviation, the DATE.
0:16:41 This algorithm should be published in the coming months, because we submitted the revised version a few weeks ago.
0:16:53 The DATE relies on an even more complete theoretical background than the estimator used here.
0:17:02 It performs as well as the present estimator, but its computational cost is significantly lower, so we are going to use the DATE.
0:17:16 Now we would like to be fully automatic; we would like to get rid of the false alarm probability that we have to fix.
0:17:22 This is possible, and we are working on this.
0:17:25 One of the most promising facts is that the DATE is, by construction, not only an estimator but also a detector, capable of coping with quite a large proportion of signal components.
0:17:38 So, although I admit this is a bit speculative, I think it is possible to perform, via the DATE, both the estimation and the detection at once, at the same time.
0:17:53 Now, we have considered the instantaneous case; next we have to deal with the convolutive mixing case, which is a bit more realistic.
0:18:03 I recall that here I discussed the case where A is known, but in the paper we tackle the problem where A is unknown.
0:18:11 Okay, this concludes my presentation, and I thank you very much for your attention.
0:18:30 [Audience question, largely inaudible, about the number of mixtures, the channels, and the number of sources.]
0:18:35 Okay, okay.
0:18:37 Sorry.
0:18:38 Thank you.