0:00:16 The outline of the talk: first I will introduce the problem, then explain the Lorentzian-based iterative hard thresholding algorithm, and then show the numerical experiments and the conclusions.
0:00:33 In compressive sensing, what we want is to sample an S-sparse signal, taking y = Φx as the measurements, and then reconstruct the signal from the measurements y.
0:00:45 However, compressive sensing systems are not always immune to noise; there is always noise in the measurement process. Most compressive sensing systems lump all the noise together: whether the noise enters in the signal domain or in the measurement domain, you can combine it into a single term, so you can model the system as your clean measurements plus a noise term, y = Φx + n.
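To make the setup concrete, here is a minimal sketch of that measurement model in Python; all sizes, the Gaussian sensing matrix, and the noise level are illustrative choices, not values from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n-dim signal, m measurements, s nonzero entries
n, m, s = 256, 80, 8
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)

# A random Gaussian sensing matrix is one common choice for Phi
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

noise = 0.01 * rng.standard_normal(m)  # the single lumped noise term
y = Phi @ x + noise                    # measurement model: y = Phi x + n
```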
0:01:16 But what happens if the measurements are corrupted by sparse, impulsive noise?
0:02:06 Most traditional compressive sensing systems assume only Gaussian noise, or just bound the noise contribution; only a few papers address this problem. When you have impulsive noise, the noise has infinite, or at least very large, variance, and that breaks all the theoretical guarantees of least-squares-based methods.
0:02:28 As motivation, here is an example of a sparse signal from which we take random measurements. This is the same measurement set, but corrupted with only one outlier. Here are the reconstructions: this is the sparse signal reconstructed from the clean measurements, and here, using the same algorithm, is the reconstruction from the corrupted measurements. You can see that the effect of only one outlier is spread over all the components, so it breaks the whole solution. The motivation of this work is therefore to develop a simple and robust algorithm capable of faithful reconstruction from these corrupted measurements.
0:03:12 What is the solution? We turn to robust estimation theory. We define the log-Lp norms, which are basically concave functions and not really norms, and we replace the least-squares L2 fidelity with this LLp term. We focus on the particular case p = 2, which is the Lorentzian norm, very well known in the image processing community.
0:03:40 The Lorentzian has some good properties. The first is that it is an everywhere-continuous, everywhere-differentiable function. The second is that it is convex near the origin: for points within γ of the origin it looks like an L2 norm. The third, and most important, is that large deviations are not heavily penalized: because of the log concavity, this norm gives large deviations only a small additional penalty. So we use this norm.
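The LL2 (Lorentzian) cost with those three properties can be sketched as follows; the function name and the example values are mine:

```python
import numpy as np

def lorentzian_norm(u, gamma):
    """LL2 (Lorentzian) 'norm': sum of log(1 + u_i^2 / gamma^2).

    For |u_i| << gamma it behaves like a scaled squared-L2 term
    (convex near the origin); for large |u_i| it grows only
    logarithmically, so outliers are not heavily penalized.
    """
    u = np.asarray(u, dtype=float)
    return float(np.sum(np.log1p((u / gamma) ** 2)))

# Small residuals: approximately quadratic, like L2
small = lorentzian_norm([0.01], gamma=1.0)
# A huge outlier contributes only logarithmically
big = lorentzian_norm([1000.0], gamma=1.0)
```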
0:04:16 In previous work we modified the basis pursuit algorithm: instead of minimizing the L1 norm subject to an L2 constraint, we find the signal with minimum L1 norm under a Lorentzian constraint on the data fidelity, and we have reconstruction guarantees for it. However, this approach has a problem: while very robust to noise, it is slow and complex to solve, and for large signals, for large problems, it was impossible to solve within memory limits.
0:04:51 So we came up with a different, iterative, gradient-based algorithm. We start from the ideal optimization problem: find an S-sparse vector x that minimizes this objective function, which is a Lorentzian fidelity term. Instead of solving this directly, which is an NP-hard, combinatorial problem, we use an iterative algorithm: a basic iterative hard thresholding algorithm.
0:05:25 The algorithm iterates as follows: g is the gradient of the objective function and μ is the step size, which changes over the iterations. You can think of this as a gradient projection algorithm: you iteratively move in the opposite direction of the gradient and then project your solution onto the subspace of S-sparse signals. That projection is exactly what the operator H_S does: the hard thresholding operator keeps the S largest components of its argument and sets the others to zero.
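A minimal sketch of the plain (least-squares) IHT step just described, assuming the standard update x ← H_S(x + μ Φᵀ(y − Φx)); the names are illustrative:

```python
import numpy as np

def hard_threshold(v, s):
    """H_s: keep the s largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def iht_step(x, y, Phi, s, mu):
    """One plain (least-squares) IHT iteration:
    gradient step on the L2 fidelity, then projection onto
    the set of s-sparse vectors."""
    return hard_threshold(x + mu * Phi.T @ (y - Phi @ x), s)
```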
0:06:04 To give some intuition for what the algorithm does: the gradient of the Lorentzian function can be expressed in this weighted form, where y − Φx_t is the error, the residual at iteration t, and the matrix W_t is a diagonal matrix where each diagonal entry is γ² / (γ² + r_i²): gamma squared over gamma squared plus the squared error.
0:06:33 You can read these weights as follows. Whatever γ is (in this example γ = 1), we fully trust all the measurements whose error is within a distance γ of the true, clean value, and the other ones we do not trust as much, so we give a small weight to those measurements; they would be the ones that are highly corrupted.
0:07:06 So we end up with this Lorentzian iterative hard thresholding algorithm, which is basically the same algorithm as the least-squares-based one; the only real difference is that we add a multiplication by a diagonal matrix. In terms of computational load it is about the same as the standard algorithm, but now the algorithm is robust against impulsive noise.
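Putting the weighted gradient and the projection together, one LIHT iteration might look like this sketch; the weight formula γ²/(γ² + r²) follows the diagonal W described above, while everything else (names, sizes) is an illustrative assumption:

```python
import numpy as np

def liht_step(x, y, Phi, s, gamma, mu):
    """One Lorentzian IHT iteration (sketch of the talk's update).

    The Lorentzian gradient is a *weighted* least-squares gradient:
    residuals within roughly gamma of zero get weight near 1
    (trusted measurements), large residuals get weight near 0
    (suspected outliers).  Then hard-threshold to s-sparsity.
    """
    r = y - Phi @ x                       # residual at this iterate
    w = gamma**2 / (gamma**2 + r**2)      # diagonal of W_t
    v = x + mu * Phi.T @ (w * r)          # weighted gradient step
    out = np.zeros_like(v)                # H_s: keep s largest entries
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out
```

Note how a single huge residual (an outlier) gets an almost-zero weight, so it barely perturbs the update, which is exactly the robustness mechanism described above.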
0:07:34 Here we analyze the algorithm in terms of the restricted isometry property. The RIP condition we require is exactly the same as for the least-squares iterative hard thresholding algorithm, and we get this reconstruction bound. The first term in the error bound depends on the norm of x and goes to zero as t goes to infinity. The second term is the noise contribution; the difference is that now we have a Lorentzian constraint on the noise instead of an L2 constraint, and we get this exponential bound.
0:08:27 Here are some crucial design points. The first is the selection of γ. Why is this critical? If γ is too large, you are not going to reject the outliers; if γ is too small, you are going to reject almost everything in your measurements. We do not have a theoretical guarantee for the proper selection of γ; however, empirically we have seen that this order-statistics-based estimator works fine: half the difference between the 87.5% and the 12.5% empirical quantiles of the measurements. By setting γ this way we are considering that 25% of the measurements are corrupted, meaning we do not trust that 25%, while we trust the remaining 75% of the measurements.
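The quantile-based estimate of γ just described can be sketched as:

```python
import numpy as np

def estimate_gamma(y):
    """Order-statistics estimate of gamma from the talk:
    half the spread between the 87.5% and 12.5% empirical
    quantiles of the measurements.  This implicitly distrusts
    the most extreme 25% of measurements and trusts the rest."""
    q_hi, q_lo = np.quantile(y, [0.875, 0.125])
    return (q_hi - q_lo) / 2.0
```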
0:09:19 The other design point is the selection of the step size μ at each iteration. The optimal choice would be the one that achieves the maximal descent in the opposite direction of the gradient; however, that does not have a closed-form solution. Instead, we select it by solving this problem as a reweighted least-squares problem with the matrix W held fixed; that problem does have a closed-form solution and can be easily calculated. With this step size and this update, the Lorentzian fidelity term is smaller than, or at least equal to, its value at the previous iteration, so we always move in a nonincreasing direction.
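A sketch of that closed-form step size: the weights are frozen at the current residual and the resulting weighted-LS objective is minimized along the gradient direction. The restriction to the current support follows the normalized-IHT convention and is an assumption on my part, not a detail stated in the talk:

```python
import numpy as np

def liht_step_size(x, y, Phi, gamma):
    """Closed-form step size for the frozen-weights subproblem:
    mu = ||g_S||^2 / (g_S^T Phi^T W Phi g_S), where g is the
    weighted gradient and g_S its restriction to the current
    support (assumption: support of x, or all of g when x = 0)."""
    r = y - Phi @ x
    w = gamma**2 / (gamma**2 + r**2)   # frozen diagonal of W
    g = Phi.T @ (w * r)                # weighted gradient
    supp = np.flatnonzero(x)
    if supp.size:
        g_s = np.zeros_like(g)
        g_s[supp] = g[supp]
    else:
        g_s = g
    Pg = Phi @ g_s
    denom = float(Pg @ (w * Pg))
    return float(g_s @ g_s) / denom if denom > 0 else 1.0
```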
0:10:05 Now the experimental setup. The first experiment is performed using contaminated Gaussian noise: Gaussian noise plus outliers. The contamination factor ε is the fraction of outliers in the noise, and it ranges from 10^-3 up to 50%. This curve shows the performance of the least-squares-based IHT algorithm; of course, as the outliers increase, the performance just drops. The red curve is the performance of the weighted median regression algorithm, a very robust algorithm based on the L1 norm, but as the noise gets more impulsive its performance also decays.
0:10:53 Here is the performance of the Lorentzian iterative hard thresholding algorithm with two choices of γ. The first uses the γ estimated from the order statistics as explained before, and the blue one assumes the range of the clean measurements is known a priori: if you know that range and set γ accordingly, you get, of course, the better performance. However, the performance with our estimated γ is still good and close to the other one, without any prior information about the clean measurements.
0:11:33 Here are some examples with α-stable noise; the curves are the same. The performance of the least-squares-based IHT decays as α gets smaller: as α approaches zero we have a more impulsive environment. When α = 1 we have the Cauchy distribution, and when α = 2 we have the Gaussian distribution, the classical light-tailed case. The performance of the new algorithm is very good, and furthermore the method is robust not only under impulsive noise but also under Gaussian noise. The nice thing about this estimator is that it works in both light-tailed and heavy-tailed environments.
0:12:16 Here is an example with measurements corrupted by α-stable noise, now as a function of the number of measurements. The green curve is α = 0.5, which is a very impulsive environment, and we see that, of course, we need more samples to compensate for the noisy measurements. When α = 2 we have the Gaussian case, and as you can see the performance of the Lorentzian-based algorithm is about the same as that of the least-squares-based algorithm, which is optimal for Gaussian noise, so we are not losing much in the Gaussian environment.
0:12:55 A final example, now in a real-image setting: this is a 256 × 256 image. We take random Hadamard measurements, around 32,000 of them, and add Cauchy noise to them; the bottom image shows the corrupted measurements. We perform a reconstruction from the corrupted measurements with the least-squares-based iterative hard thresholding algorithm, and of course the reconstruction is not good. Here we instead use a cleaning approach, removing all the outliers from the measurements before reconstructing; the performance still suffers because we are losing information. And here, from the same measurements, is the recovery with the Lorentzian iterative hard thresholding algorithm. Just for comparison, I also show the recovery with the least-squares IHT algorithm in the noiseless case.
0:13:55 Okay, let me conclude. We have presented a simple iterative algorithm that is robust against impulsive noise. Its performance and properties have been studied, and we see that it is robust against heavy-tailed noise but also works in light-tailed environments. One direction for future work is to leverage prior information on the sparse signal, such as the model-based compressive sensing that the people at Rice are working on, because this algorithm, like the standard hard thresholding algorithm, is not directly suitable for those models. Thank you very much.
0:14:43 Any questions?
0:14:50 When you assume the noise is impulsive, have people tried this before: since that means the noise itself is sparse, you can simply apply least-squares IHT to the system augmented with the identity, [Φ I], and estimate the noise as well as the sparse coefficients. I would have thought that would be a much fairer comparison. Have you looked at that?
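The questioner's suggested baseline, jointly estimating the sparse signal and the sparse noise with plain LS-IHT on an augmented system, amounts to this simple construction (a sketch of the suggestion, not the talk's method):

```python
import numpy as np

def augment_for_sparse_noise(Phi):
    """If the corruption e is itself sparse, stack the identity next
    to Phi and treat y = [Phi I] [x; e] as one larger sparse
    recovery problem: plain LS-IHT on the augmented matrix then
    estimates the sparse signal x and the sparse noise e jointly."""
    m = Phi.shape[0]
    return np.hstack([Phi, np.eye(m)])
```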
0:15:17 Yes, I have looked at those approaches. I have not done that comparison on images, but I have done it for impulsive noise with these characteristics, for the contaminated Gaussian measurements. The performance is actually about the same; the breakdown point depends on how sparse the corruption of the measurements is. Of course, if you have fewer outliers in your measurements, your recovery is going to be better, but as you go further, with more than fifty percent of the measurements corrupted, your performance is going to drop and you would need many more measurements to recover the sparse signal, which is the same behavior we see at the breakdown point here; the fifty percent is not just the breakdown point of our algorithm. So in general the performance is very similar to this one. The only issue with those approaches is that some of them are iterative, alternating algorithms: first find the sparse signal, then the corrupted measurements, and iterate; other people just solve a single L1 problem and find both of them.
0:16:32don't compare some put the object you to me you
0:16:35the you assume you could do the same thing with a the actually is not it's not the uh no
0:16:42such as strict model is a well yeah yeah it's uh so you could do any it's again is plus
0:16:46and he's but it's also that i and and and just put the uh all the men's the
0:16:51uh sensing again the the coefficients with
0:16:54impossible is much
0:16:56yes the this sys
0:17:06 The Lorentzian norm is particularly popular in the geophysics community; they use it very extensively, or at least they used to. Did you make comparisons to their work also? It is very well known that it is good for impulsive noise.

0:17:22 Yes, for robust estimation and for sparse signals.

0:17:25 So you have put it in the framework of compressed sensing, of course.
0:17:32 Well, I have looked at other literature, but not from the geophysics side. We found only a few previous works using the Lorentzian norm in the compressive sensing literature, not only for the noise part but also as a sparsity-encouraging term. But that is the only reason: I have only looked at the image processing literature, not the geophysics literature.
0:18:07 Thank you.
0:18:16 I have two questions. The first is about this experiment: you chose the power of the noise, but I do not know the power of the measurements. Is it bigger or smaller than the noise power?

0:18:41 The scale of the measurements is larger than the noise, yes, for this case.

0:18:49 Okay.
0:18:51 And the second one is about the principle of the iteration. Your algorithm, with this matrix W, the big W, seems a little bit similar to iteratively reweighted least squares. I do not see what the difference is between that and your proposed algorithm.

0:19:28 Right: if you write out the gradient with the weights, the update can be seen as related to iteratively reweighted least squares; the difference is that here the weights come from the Lorentzian cost, are recomputed at every iteration, and the update is followed by the hard thresholding projection.

0:19:42 Yeah, okay.