0:00:13 | i |

0:00:15 | okay |

0:00:16 | uh so |

0:00:18 | this is like the |

0:00:19 | outline for the talk |

0:00:21 | a brief preamble to introduce the problem |

0:00:23 | and then we explain the Lorentzian-based iterative hard thresholding algorithm |

0:00:28 | and then we discuss the numerical experiments and the conclusions |

0:00:33 | well |

0:00:33 | in compressive sensing, basically what we want is to sample S-sparse signals from a few linear measurements |

0:00:39 | and reconstruct the signal from the measurements y |

0:00:45 | however, compressive sensing systems are not always immune to noise; we always have noise in the measurements |

0:00:53 | most compressive sensing systems can combine all the noise, whether you have |

0:01:00 | noise in the |

0:01:01 | signal domain or in the measurement domain, you can combine it |

0:01:05 | here just as a single term |

0:01:07 | so you can just model the system as being your clean measurements |

0:01:12 | plus |

0:01:13 | some noise term |

0:01:16 | however, what happens if the measurements are |

0:01:19 | corrupted by sparse or impulsive noise? |
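The measurement model just described (clean linear measurements plus a single additive noise term) can be sketched as follows. The dimensions, the Gaussian sensing matrix, and the noise level are illustrative assumptions, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 30, 100, 4                    # measurements, ambient dimension, sparsity

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)  # S-sparse signal

noise = 0.01 * rng.standard_normal(m)   # the single additive noise term
y = Phi @ x + noise                     # observed (possibly corrupted) measurements
```

The impulsive-noise scenario discussed next simply replaces the small Gaussian `noise` with a vector containing a few very large entries.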

0:01:23 | okay |

0:01:55 | is |

0:01:58 | oh okay |

0:01:59 | so what happens if the |

0:02:01 | measurements are corrupted by impulsive or |

0:02:04 | sparse corrupting noise? |

0:02:06 | most traditional compressive sensing systems only assume Gaussian noise, or bound the noise contribution |

0:02:13 | a few papers address this problem |

0:02:15 | well, really, when you have impulsive noise, the noise has infinite variance, or a very large variance |

0:02:20 | and that really just breaks all the theoretical guarantees of the least-squares |

0:02:26 | based |

0:02:27 | algorithms |

0:02:28 | as some motivation, here we have an example of a sparse signal, and we take |

0:02:33 | a few random measurements |

0:02:35 | here are the same measurements, corrupted with only one outlier among all the measurements |

0:02:42 | and here are the reconstructions |

0:02:43 | this is the |

0:02:44 | target sparse signal reconstructed from the clean measurements using OMP |

0:02:49 | and here, using the same algorithm, we reconstruct it from the corrupted measurements |

0:02:54 | you can see that the effect of only one outlier is spread over all the components |

0:03:00 | so it breaks the reconstruction; so really the motivation of this work is to develop a simple |

0:03:05 | and robust algorithm that is capable |

0:03:07 | of a faithful reconstruction from these |

0:03:10 | corrupted measurements |

0:03:12 | so what is the solution? the approach we take here |

0:03:17 | is robust estimation theory, to the rescue |

0:03:19 | so what we define is |

0:03:21 | the log-Lp norms |

0:03:23 | which are just basically concave |

0:03:24 | functions; they are not really norms |

0:03:26 | so we change the least squares, which makes use of the L2 norm; we change it to these LLp norms |

0:03:31 | and actually we focus on the particular case |

0:03:34 | when p equals two |

0:03:36 | which is the Lorentzian norm, a case very well known in the image processing community |

0:03:40 | the Lorentzian norm has some |

0:03:43 | let's say, good properties; the first one is that it is an everywhere continuous function, and it is everywhere |

0:03:48 | differentiable |

0:03:50 | the second is that it is convex |

0:03:52 | near the origin, so for points that are |

0:03:55 | within gamma |

0:03:57 | of the origin, this norm looks like an L2 norm |

0:04:01 | and the third and most important one is that large deviations are not heavily penalized by this norm |

0:04:06 | because of this log behavior |

0:04:08 | for a large deviation |

0:04:10 | the log |

0:04:11 | concavity just makes it so |

0:04:12 | that this norm does not heavily penalize it |
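A minimal sketch of the Lorentzian (LL2) cost as described above, namely a sum of log(1 + u_i^2 / gamma^2) terms; the exact scaling is my reading of the talk, so treat it as illustrative.

```python
import math

def lorentzian_norm(u, gamma=1.0):
    """LL2 'norm': sum_i log(1 + (u_i / gamma)^2).

    Everywhere continuous and differentiable; roughly quadratic
    (L2-like) for |u_i| much smaller than gamma, and only
    logarithmic for large deviations, so one huge outlier adds
    only a small, bounded cost.
    """
    return sum(math.log1p((v / gamma) ** 2) for v in u)

# Near the origin it behaves like a scaled squared L2 norm ...
small = lorentzian_norm([0.1, -0.1])
# ... while a large outlier is penalized only logarithmically.
with_outlier = lorentzian_norm([0.1, -0.1, 1000.0])
```

For comparison, the same outlier would contribute 10^6 to a squared-error cost, but only log(1 + 10^6), roughly 13.8, here.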

0:04:15 | so we use this norm |

0:04:16 | in a previous work |

0:04:18 | we modified the basis pursuit algorithm, so instead of |

0:04:22 | minimizing the L1 norm subject to an L2 constraint |

0:04:26 | what we do is |

0:04:27 | find the signal with minimal L1 norm, with a Lorentzian constraint on the fidelity term |

0:04:34 | and we have these |

0:04:35 | reconstruction guarantees |

0:04:36 | however, we had a problem with this: it was very robust to noise |

0:04:40 | but it was really slow and complex to solve |

0:04:44 | and for large signals, or for large problems, it was impossible to solve; the memory runs |

0:04:51 | out |

0:04:51 | so we have now come up with a different, thresholding-based algorithm |

0:04:57 | we start with this ideal optimization problem |

0:05:01 | really, what we want is to find |

0:05:02 | a sparse vector x, or an S-sparse vector x |

0:05:07 | that minimizes this objective function, which is a Lorentzian fidelity term |

0:05:14 | instead of solving this, which is an NP-hard, combinatorial problem |

0:05:19 | we use an iterative algorithm |

0:05:22 | the basic iterative hard thresholding algorithm |

0:05:25 | this algorithm just goes like this |

0:05:27 | where g is the gradient of the objective function, and mu |

0:05:30 | is the step |

0:05:31 | size, a step size that changes |

0:05:34 | with time |

0:05:35 | it keeps changing; here you can think of this as a |

0:05:39 | gradient projection algorithm, or as a projected Landweber |

0:05:43 | iteration: you go iteratively in the opposite direction of the gradient, and then |

0:05:48 | you project your solution onto the subspace of S-sparse signals |

0:05:52 | which is basically what this |

0:05:53 | operator H_S is doing: it is the hard thresholding operator that keeps the S largest components of your |

0:05:59 | signal |

0:06:01 | and sets the other ones to zero |
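The hard thresholding operator H_S and the plain least-squares IHT iteration described above can be sketched as follows; this is the standard iteration the talk builds on, in toy form, with a fixed step size assumed for simplicity.

```python
import numpy as np

def hard_threshold(x, s):
    """H_S: keep the s largest-magnitude components of x, zero the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]   # indices of the s largest magnitudes
    out[keep] = x[keep]
    return out

def iht(y, Phi, s, mu=1.0, iters=50):
    """Least-squares IHT: x <- H_S(x + mu * Phi^T (y - Phi x)).

    The gradient step moves against the gradient of ||y - Phi x||^2,
    then projects onto the set of S-sparse vectors.
    """
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + mu * Phi.T @ (y - Phi @ x), s)
    return x
```

With a well-conditioned `Phi` and suitable `mu`, this converges to an S-sparse estimate; the talk's contribution is to replace the least-squares gradient inside this loop.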

0:06:04 | now, just |

0:06:06 | to get |

0:06:07 | an intuition of what the algorithm does: the gradient of the Lorentzian function can be expressed |

0:06:13 | as this weighted |

0:06:14 | error function, where y minus Phi times x_t |

0:06:19 | is the error |

0:06:20 | at iteration t |

0:06:22 | and this matrix W_t |

0:06:24 | is a diagonal matrix, where each component of the diagonal is just gamma squared over |

0:06:30 | gamma squared |

0:06:31 | plus the error squared |

0:06:33 | think of these as a sort of weights; so what this does is that whatever is |

0:06:38 | inside |

0:06:39 | gamma (here in this example gamma was one), so within gamma, we trust |

0:06:45 | all the measurements |

0:06:47 | that are |

0:06:49 | within a distance of gamma of the true |

0:06:54 | clean measurements |

0:06:55 | and the other ones |

0:06:56 | we do not trust that much, so we |

0:06:58 | give, let's say, a little weight |

0:07:00 | to those measurements; and if you look at those measurements, they would be the ones that |

0:07:03 | are highly corrupted |

0:07:06 | so we end up with this Lorentzian iterative hard thresholding algorithm |

0:07:12 | which is basically almost the same algorithm as the least-squares-based algorithm; the only difference |

0:07:17 | really is that we are adding a multiplication by a diagonal matrix, so in terms of computational load |

0:07:24 | it is about the same as the original algorithm |

0:07:27 | but now the algorithm is robust against impulsive noise |
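Putting the pieces together, one Lorentzian-IHT iteration differs from the least-squares version only by the diagonal weights gamma^2 / (gamma^2 + e_i^2) applied to the residual. This is my reading of the update from the talk, not the authors' exact pseudocode:

```python
import numpy as np

def hard_threshold(x, s):
    """H_S: keep the s largest-magnitude components, zero the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def liht_step(x, y, Phi, s, gamma=1.0, mu=1.0):
    """One Lorentzian-IHT step: weight each residual entry before the
    gradient step, so heavily corrupted measurements barely count."""
    e = y - Phi @ x                      # residual at this iteration
    w = gamma**2 / (gamma**2 + e**2)     # ~1 if |e_i| << gamma, ~0 for outliers
    return hard_threshold(x + mu * Phi.T @ (w * e), s)
```

For example, with `Phi` the identity and `y = [5, 0, 100]` where 100 is an outlier, one step from zero keeps the component driven by the clean measurement, because the outlier's weight is about 10^-4.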

0:07:34 | here we show some |

0:07:39 | let's say, guarantees in terms of the restricted isometry property |

0:07:42 | the assumed restricted isometry property condition |

0:07:46 | is exactly the same as for the least-squares iterative hard thresholding algorithm, and we |

0:07:52 | get this reconstruction bound |

0:07:54 | the first term in the error bound is |

0:07:58 | an error that depends on the |

0:08:00 | norm of x |

0:08:02 | and it goes to zero |

0:08:03 | as t goes to infinity |

0:08:06 | and the second one |

0:08:07 | is the noise term |

0:08:09 | the interesting thing is that now we have a Lorentzian constraint |

0:08:13 | on the noise instead of having an L2 constraint |

0:08:16 | we just put a Lorentzian norm constraint |

0:08:20 | on the noise, and we get this exponential |

0:08:23 | bound |

0:08:27 | here are some practical considerations; two highlights: first, the selection of gamma |

0:08:33 | why? because if gamma is too large |

0:08:36 | then you are not going to reject the outliers |

0:08:40 | and if gamma is |

0:08:41 | too small |

0:08:42 | you are going to reject almost everything |

0:08:44 | in your measurements |

0:08:47 | we do not have a theoretical guarantee for the proposed method of selecting gamma; however |

0:08:52 | empirically we have seen that this estimator based on quantiles |

0:08:56 | works fine: it is just the |

0:09:00 | 87.5 percent quantile minus the 12.5 percent quantile, divided by two |

0:09:04 | and basically, by setting this |

0:09:07 | gamma, we are considering that twenty-five percent of the measurements are corrupted, or we do not trust |

0:09:12 | that twenty-five percent |

0:09:13 | while we trust the remaining seventy-five percent of the measurements |
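The quantile-based scale estimate just described, as I understood it from the talk: half the spread between the 87.5th and 12.5th percentiles of the measurements, so the outer 25% is implicitly distrusted. Treat the exact quantiles and the halving as assumptions read from the transcript.

```python
import numpy as np

def estimate_gamma(y):
    """Robust scale for gamma: half the 87.5%-12.5% interquantile
    range of the measurements.  Outliers beyond those quantiles
    barely move the estimate, unlike a standard-deviation estimate."""
    q_hi, q_lo = np.percentile(y, [87.5, 12.5])
    return (q_hi - q_lo) / 2.0
```

The point is robustness: appending one enormous outlier to `y` changes the estimate only marginally, whereas it would dominate a variance-based scale.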

0:09:19 | now the other highlight is the selection of the step size mu at each iteration |

0:09:25 | the optimal one is to select the mu |

0:09:30 | that attains the maximal descent |

0:09:33 | in the opposite direction of the gradient; however, that does not have a closed-form solution |

0:09:37 | so we select it by solving this problem as a reweighted least squares problem, freezing the matrix W |

0:09:42 | and then |

0:09:43 | this problem now has a closed-form solution |

0:09:47 | that can be easily calculated |

0:09:49 | and what we show is that |

0:09:51 | with this new mu at iteration t |

0:09:53 | the Lorentzian |

0:09:55 | fidelity term |

0:09:57 | is smaller, or at least equal, compared to the previous iteration, so we are going in a |

0:10:01 | non-increasing direction |
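A sketch of the closed-form step size obtained by freezing W in the reweighted least-squares surrogate. The exact formula here is my reconstruction in the spirit of normalized IHT (restricting the gradient to the current support), so treat it as an assumption rather than the paper's formula.

```python
import numpy as np

def step_size(g, Phi, w, support):
    """mu = ||g_S||^2 / ||W^(1/2) Phi_S g_S||^2: the exact line-search
    minimizer of the frozen *weighted* least-squares surrogate along
    the support-restricted gradient direction g."""
    g_s = g[support]                      # gradient restricted to the support S
    v = np.sqrt(w) * (Phi[:, support] @ g_s)  # W^(1/2) Phi_S g_S
    return float(g_s @ g_s) / float(v @ v)
```

As a sanity check, with an identity sensing matrix and unit weights the surrogate is exactly the squared error, and the minimizing step is 1.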

0:10:05 | here is the experimental setup for what we have |

0:10:09 | done; the first experiment is performed using contaminated Gaussian noise, where we have Gaussian noise |

0:10:15 | plus outliers |

0:10:17 | in the noise |

0:10:18 | here, epsilon is the contamination factor, which is just the percentage of outliers in the noise; it |

0:10:24 | goes from ten to the minus three up to fifty percent |

0:10:30 | this line shows the performance of the least-squares-based IHT algorithm, and of course when we have |

0:10:37 | outliers |

0:10:38 | the performance just drops |

0:10:40 | and the |

0:10:41 | red one is the performance of the weighted median regression algorithm, which is a very |

0:10:47 | robust algorithm based on the L1 norm, but as the noise gets more impulsive |

0:10:51 | the performance decays also |

0:10:53 | here is the performance of our iterative hard thresholding algorithm with two choices of gamma |

0:10:59 | the black one |

0:11:00 | is using the gamma estimate just explained, based on the order statistics |

0:11:06 | and the blue one |

0:11:07 | is |

0:11:08 | knowing a priori the range of the clean measurements; so if you know a priori |

0:11:14 | the range of the clean measurements, and you just set gamma accordingly |

0:11:17 | it has, of course, the better performance |

0:11:21 | however, the performance with our |

0:11:23 | estimated gamma is still good, and it is close to the other one, without knowing anything or |

0:11:29 | having any prior information about the clean measurements |

0:11:33 | here are some results with alpha-stable noise; again the curves are the same; this first one is the performance of the |

0:11:38 | least-squares |

0:11:39 | based IHT |

0:11:41 | alpha is the tail parameter: when alpha gets close to zero we have |

0:11:46 | a more impulsive environment |

0:11:48 | when alpha equals one we have the Cauchy distribution, and when alpha equals two we have the |

0:11:53 | Gaussian distribution, which is the classical light-tailed |

0:11:55 | distribution |

0:11:57 | so, of course, in this environment the performance is very good |

0:12:01 | and furthermore, the method is not only robust |

0:12:03 | when we have |

0:12:05 | impulsive noise, but is also robust |

0:12:07 | when we have Gaussian noise |

0:12:09 | so the good thing about this is that it works in both light-tailed and heavy-tailed |

0:12:13 | environments |

0:12:16 | here is an example of |

0:12:18 | measurements corrupted with alpha-stable noise, but now varying the number of measurements, or |

0:12:23 | the number of samples |

0:12:24 | we see here |

0:12:26 | this plot, the green one |

0:12:27 | is with alpha equal to zero point five, which is a very highly |

0:12:31 | impulsive environment |

0:12:33 | and we see that, of course, we need more samples to compensate for the noisy |

0:12:38 | measurements; when alpha equals two |

0:12:40 | we have the Gaussian case, and as you can see, the performance of the Lorentzian-based algorithm |

0:12:46 | is about the same as the performance of the least-squares-based algorithm, which is optimal for Gaussian |

0:12:50 | noise, so we are not losing too much |

0:12:53 | in a Gaussian environment |

0:12:55 | now we have a final example, here with an image; this is a two hundred fifty-six |

0:13:00 | by two hundred fifty-six image |

0:13:02 | we take some random Hadamard measurements, actually thirty-two thousand |

0:13:06 | random Hadamard measurements, and we add |

0:13:09 | some Cauchy noise to them; so here |

0:13:11 | the bottom one is the corrupted image |

0:13:15 | we perform a reconstruction from the corrupted measurements with the least-squares |

0:13:20 | based iterative hard thresholding algorithm, and of course the reconstruction is not really good |

0:13:25 | and here we use a cleaning |

0:13:27 | approach, just cleaning the measurements, removing all the outliers before reconstructing |

0:13:33 | but still the performance is not that good, because we are losing information |

0:13:38 | and here |

0:13:39 | is |

0:13:40 | with the same measurements, the recovery of the Lorentzian iterative hard thresholding algorithm |

0:13:45 | and here, just for comparison, I plot the recovery |

0:13:48 | with the least-squares |

0:13:50 | based IHT algorithm, but in the noiseless case |

0:13:55 | okay, so let me conclude now; we have presented a simple and robust |

0:14:00 | iterative algorithm that is robust against |

0:14:02 | impulsive noise |

0:14:04 | the performance and some properties were studied |

0:14:07 | and we see that it is |

0:14:09 | robust against heavy-tailed noise, but also in light-tailed |

0:14:12 | environments |

0:14:13 | one piece of future work is to |

0:14:16 | well |

0:14:17 | leverage prior information, prior information like in model-based |

0:14:23 | compressive sensing, which the people at Rice are working on, because this algorithm is not directly suitable for those models |

0:14:29 | it is so far only the plain hard thresholding algorithm |

0:14:33 | and thank you very much |

0:14:43 | so, any questions? |

0:14:50 | when you assume the noise is impulsive, it could relate to this previous work |

0:14:56 | because that means that the noise itself is sparse, so I |

0:15:01 | would have thought a fast approach would be to |

0:15:03 | apply least-squares IHT with your measurement matrix |

0:15:06 | augmented with the identity, and so jointly |

0:15:09 | estimate the noise as well as your sparse coefficients |

0:15:12 | I would have thought that |

0:15:13 | this would be a much fairer comparison; have you looked at this? |

0:15:17 | yes, I have looked at those; what I have not done |

0:15:22 | is a comparison using images, but I have done |

0:15:24 | comparisons |

0:15:26 | like, for this |

0:15:27 | impulsive noise |

0:15:28 | with these characteristics, for the contaminated Gaussian measurements |

0:15:31 | the performance is |

0:15:33 | actually about the same; it is just that the breakdown point depends on how sparse |

0:15:38 | the corruption of your measurements is; of course, if |

0:15:43 | you have fewer outliers in your measurements, your recovery |

0:15:47 | is going to be better |

0:15:48 | but as you go |

0:15:50 | further, I mean, when more than fifty percent of the measurements are corrupted |

0:15:55 | then your performance is going to drop; you would have to have enough measurements to recover |

0:15:59 | that sparse signal, which is like the same behavior we have with the breakdown point here; the twenty-five percent |

0:16:05 | comes not just from the breakdown point of the |

0:16:08 | algorithm; and, by the way, in your approach the performance |

0:16:12 | is also very |

0:16:14 | similar to this one; the only problem with that is that it |

0:16:18 | could be, like |

0:16:19 | an alternating algorithm: some people |

0:16:20 | first, like |

0:16:21 | find the sparse signal and then |

0:16:24 | the corruption of the measurements, and iterate; and some people just solve |

0:16:28 | just one L1 problem and find both of them |

0:16:31 | but yeah |

0:16:32 | I have not done the comparison you are putting to me |

0:16:35 | and you assume you could do the same thing with, actually, it is not the |

0:16:41 | L1 |

0:16:42 | with such a strict model as well; yeah, so you could do it; it is again least squares |

0:16:46 | plus thresholding, but you also just augment the measurements, the |

0:16:51 | sensing matrix, and the coefficients with |

0:16:54 | the impulsive noise |

0:16:56 | yes, this is |

0:16:57 | basically |

0:16:58 | right |

0:17:06 | the L1 norm is particularly well known in the |

0:17:09 | geophysics community, and they use it very extensively |

0:17:12 | at least still |

0:17:13 | they do use it |

0:17:15 | did you make |

0:17:16 | comparisons also |

0:17:17 | with that? |

0:17:19 | because it is very well known that it is good for |

0:17:21 | impulsive noise |

0:17:22 | yes, in geophysics, and for sparse signals |

0:17:25 | so if you put it in the framework of |

0:17:27 | compressive sensing, of course |

0:17:29 | it is robust |

0:17:34 | well, I have looked at other literature, though not from the geophysics |

0:17:37 | part |

0:17:39 | really, we have only, like |

0:17:41 | previous works in our group with the Lorentzian norm in the compressive sensing framework |

0:17:48 | and not only for the |

0:17:50 | noise |

0:17:51 | part, but also for the, let's say, sparsity-encouraging |

0:17:55 | term |

0:17:56 | but that is the only thing, because I have only looked at the |

0:18:00 | image processing literature, not the geophysics literature |

0:18:07 | thank you |

0:18:16 | I have two questions |

0:18:18 | the first question is for this |

0:18:20 | for this experiment |

0:18:22 | you chose the power of the noise |

0:18:28 | but we do not know the |

0:18:30 | power of the measurements |

0:18:33 | that is, is it |

0:18:35 | bigger than the noise |

0:18:39 | or smaller than the noise |

0:18:41 | power? right, you mean the |

0:18:42 | scale of the measurements; yeah, yeah, the signal is just stronger |

0:18:47 | than the noise |

0:18:49 | okay, yeah, for this case |

0:18:51 | and the second one is about |

0:18:54 | the principle of the iteration |

0:18:56 | in your algorithm |

0:19:00 | there, yeah, it is here |

0:19:02 | for the weights w and the big matrix W |

0:19:05 | it is |

0:19:07 | a little bit |

0:19:09 | similar to the iteratively reweighted least squares algorithm |

0:19:15 | the reweighted version of, yeah, the iteratively reweighted least squares algorithm |

0:19:20 | I do not know what the difference is between this one |

0:19:25 | and the proposed |

0:19:26 | proposed method |

0:19:28 | you are right, it is related to the reweighted algorithms, but you can look at it as carrying out |

0:19:34 | the iterative reweighting |

0:19:36 | with the Lorentzian norm |

0:19:39 | sure |

0:19:42 | yeah, okay |