Okay, so here is the outline of the talk: first I will introduce the problem, then explain the Lorentzian-based iterative hard thresholding algorithm, and then show the numerical experiments and the conclusions.

In compressive sensing, basically what we want is to sample S-sparse signals with very few measurements, and then reconstruct the signal from the measurements y.

However, compressive sensing systems are not always immune to noise; we always have noise in the measurements. In most compressive sensing systems, whether the noise enters in the signal domain or in the measurement domain, you can combine it into a single term, so you can model the system as your clean measurements plus a noise term: y = Φx + z.
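As a rough illustration (not from the talk), the measurement model can be sketched in NumPy; the dimensions and noise level here are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 256, 80, 8                 # ambient dimension, measurements, sparsity

x = np.zeros(n)                      # an S-sparse signal
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
z = 0.01 * rng.standard_normal(m)                # noise term
y = Phi @ x + z                                  # y = Phi x + z
```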

However, what happens if the measurements are corrupted by impulsive or sparse noise?

Most traditional compressive sensing systems only assume Gaussian noise or bound the noise contribution; only a few papers address this problem. The difficulty is that impulsive noise has infinite, or at least very large, variance, so it breaks all the theoretical guarantees of the least-squares-based algorithms.

As motivation, here we have an example of a sparse signal from which we take random measurements, and here are the same measurements corrupted with only one outlier among all of them. This is the sparse signal reconstructed from the clean measurements, but here, using the same algorithm on the corrupted measurements, you can see that the effect of only one outlier is spread over all the components, so it breaks the whole reconstruction. So the motivation of this work is to develop a simple and robust algorithm capable of a faithful reconstruction from these corrupted measurements.

What solution do we take? Robust estimation theory to the rescue. What we use are the LLp norms, which are basically concave functions; they are not really norms. We replace the L2 norm in the least-squares data-fidelity term with an LLp term, and we focus on the particular case p = 2, which is the Lorentzian norm, very well known in the image processing community.

The Lorentzian norm has some good properties. The first is that it is an everywhere-continuous and everywhere-differentiable function. The second is that it is convex near the origin: for points within γ of the origin it looks like an L2 norm. The third, and most important, is that large deviations are only lightly penalized: the logarithmic growth means the norm does not heavily penalize outliers.
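To make these properties concrete, here is a small sketch of the Lorentzian (LL2) cost, assuming the per-component form log(1 + u²/γ²); γ = 1 is an arbitrary choice for the example:

```python
import numpy as np

def lorentzian(u, gamma=1.0):
    """LL2 (Lorentzian) cost of a residual vector: sum_i log(1 + u_i^2/gamma^2)."""
    return np.sum(np.log1p((u / gamma) ** 2))

# Near the origin it behaves like a scaled squared L2 norm ...
print(lorentzian(np.array([0.01])))      # ~ (0.01)^2 = 1e-4
# ... while a gross outlier is only logarithmically penalized.
print(lorentzian(np.array([1000.0])))    # ~ 2*log(1000), nowhere near 1e6
```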

We used this norm in a previous work, where we modified the basis pursuit algorithm: instead of minimizing the L1 norm subject to an L2 constraint, what we did was find the signal of minimum L1 norm subject to a Lorentzian constraint on the data fidelity, and we have reconstruction guarantees for it. However, we had a problem with this approach: it was very robust to noise, but it was really slow and complex to solve, and for large signals, or large problems, it was impossible to solve within the available memory.

So we now come up with a different, iterative algorithm. We start from this ideal L0 optimization problem: what we want is to find an S-sparse vector x that minimizes this objective function, which is a Lorentzian data-fidelity term. Instead of solving this directly, which is an NP-hard, combinatorial problem, we use an iterative algorithm, basically a Lorentzian variant of the iterative hard thresholding algorithm.

The algorithm goes as follows: g is the gradient of the objective function, and μ is the step size, an adaptive step size that changes over the iterations. You can think of this as a gradient projection algorithm: you iteratively move in the opposite direction of the gradient, and then you project your solution onto the subspace of sparse signals. That projection is basically what the operator H_S is doing: it is the hard thresholding operator that keeps the S largest components of your signal and sets the other ones to zero.
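The hard thresholding operator H_S can be sketched as follows (a minimal illustration, not the speaker's code):

```python
import numpy as np

def hard_threshold(x, s):
    """H_S(x): keep the s largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]    # indices of the s largest components
    out[keep] = x[keep]
    return out

v = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
print(hard_threshold(v, 2))              # keeps -3.0 and 2.0, zeros the rest
```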

To give some intuition of what the algorithm does: the gradient of the Lorentzian function can be expressed as this weighted term, where y − Φx_t is the error at iteration t, and the matrix W_t is a diagonal matrix in which each component of the diagonal is γ² over γ² plus the corresponding squared error.

You can think of these as weights. What the weighting does, for example with γ equal to one, is that we trust all the measurements whose error is within a distance γ of the true measurement, while the other ones we do not trust as much, so we give a small weight to those measurements; if you look at them, they are the ones that are highly corrupted.
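A small sketch of the weighting just described, assuming diagonal entries w_i = γ²/(γ² + r_i²); the residual values are made up for illustration:

```python
import numpy as np

def lorentzian_weights(r, gamma=1.0):
    """Diagonal of W: w_i = gamma^2 / (gamma^2 + r_i^2)."""
    return gamma**2 / (gamma**2 + r**2)

r = np.array([0.1, 0.5, 50.0])   # two ordinary residuals and one gross outlier
w = lorentzian_weights(r)
print(w)   # residuals within gamma keep weight near 1; the outlier is nearly zeroed
```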

So we end up with this Lorentzian iterative hard thresholding algorithm, which is basically almost the same as the least-squares-based algorithm; the only real difference is that we add a multiplication by a diagonal matrix. In terms of computational load it is about the same as the least-squares-based counterpart, but now the algorithm is robust against impulsive noise.
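Putting the pieces together, a minimal sketch of the iteration x_{t+1} = H_S(x_t + μ Φᵀ W_t (y − Φx_t)); for simplicity this uses a fixed step size μ rather than the adaptive step from the talk, and the function name is mine:

```python
import numpy as np

def lorentzian_iht(y, Phi, s, gamma=1.0, mu=1.0, iters=100):
    """Sketch of Lorentzian iterative hard thresholding:
    x_{t+1} = H_S( x_t + mu * Phi^T W_t (y - Phi x_t) ),
    with W_t diagonal, w_i = gamma^2 / (gamma^2 + r_i^2)."""
    n = Phi.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        r = y - Phi @ x                       # residual at iteration t
        w = gamma**2 / (gamma**2 + r**2)      # Lorentzian weights
        x = x + mu * (Phi.T @ (w * r))        # reweighted gradient step
        keep = np.argsort(np.abs(x))[-s:]     # hard threshold: s largest entries
        pruned = np.zeros(n)
        pruned[keep] = x[keep]
        x = pruned
    return x

# Tiny sanity check with Phi = I: the iteration keeps the two
# largest entries of y and suppresses the small component.
xhat = lorentzian_iht(np.array([0.0, 3.0, 0.0, -2.0, 0.1]), np.eye(5), s=2,
                      gamma=10.0, iters=10)
```

Note that as γ grows the weights approach one and the iteration reduces to plain least-squares IHT, which matches the remark that the only extra cost is the diagonal multiplication.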

Here we analyze some theoretical guarantees in terms of the restricted isometry property. The RIP condition we require is exactly the same as for the least-squares-based iterative hard thresholding algorithm, and we obtain this reconstruction bound. The first term in the error bound depends on the norm of x, and it goes to zero as t goes to infinity. The second term comes from the noise model: the difference is that now we have a Lorentzian constraint on the noise instead of an L2 constraint, and with it we get this bound.

Here are some practical considerations. It is crucial to highlight the selection of γ. Why? Because if γ is too large, then you are not going to reject the outliers, and if γ is too small, you are going to reject almost everything in your measurements.

We do not have a theoretical guarantee for the proposed estimator of γ; however, empirically we have seen that this estimator based on quantiles works fine: it is half the difference between the 0.875 quantile and the 0.125 quantile of the measurements. Basically, with this setting of γ we are considering that twenty-five percent of the measurements are corrupted, or rather, we do not trust that twenty-five percent, while we trust the remaining seventy-five percent of the measurements.
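A sketch of a quantile-based estimate of γ along these lines (the exact estimator is in the paper; the quantile levels here follow the description above, and the data are made up):

```python
import numpy as np

def estimate_gamma(y):
    """Half the spread between the 0.875 and 0.125 quantiles of the
    measurements, so the central 75% of measurements are "trusted"."""
    q_hi, q_lo = np.quantile(y, [0.875, 0.125])
    return (q_hi - q_lo) / 2.0

rng = np.random.default_rng(1)
y = rng.standard_normal(1000)
y[:50] += 100.0 * rng.standard_normal(50)   # 5% gross outliers
gamma = estimate_gamma(y)                   # barely affected by the outliers
```

Because quantiles are order statistics, a small fraction of arbitrarily large outliers moves the estimate very little, which is exactly why this choice is robust.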

The other important point is the selection of the step size μ at each iteration. The optimal choice would be the one that gives the maximal descent in the opposite direction of the gradient; however, that does not have a closed-form solution. Instead, we select it by solving a reweighted least-squares problem with the matrix W held fixed. That problem does have a closed-form solution that can be easily calculated, and with this μ we guarantee that the Lorentzian fidelity term is smaller than, or at least equal to, its value at the previous iteration, so we are moving in a non-ascent direction.
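As a hedged sketch, one closed-form step size for the reweighted least-squares surrogate with W held fixed, analogous to the one in normalized IHT, could look like this; it is my reconstruction, not necessarily the exact formula from the paper:

```python
import numpy as np

def step_size(Phi, w, g, support):
    """mu = ||g_S||^2 / ||W^{1/2} Phi g_S||^2, with the gradient g
    restricted to the current support S (an assumed closed form)."""
    g_s = np.zeros_like(g)
    g_s[support] = g[support]
    v = Phi @ g_s
    den = v @ (w * v)
    return (g_s @ g_s) / den if den > 0 else 1.0
```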

Here is the experimental setup.

The first experiment is performed using contaminated Gaussian noise, where we have Gaussian noise plus outliers. The contamination factor ε, which is the fraction of outliers in the noise, goes from 10⁻³ up to fifty percent.
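A sketch of a contaminated-Gaussian noise generator along these lines; the outlier-strength parameter kappa is an assumption of this example:

```python
import numpy as np

def contaminated_gaussian(m, sigma=0.1, eps=0.1, kappa=100.0, rng=None):
    """Mixture noise: with prob. (1 - eps) draw N(0, sigma^2); with prob.
    eps draw a high-variance outlier N(0, kappa * sigma^2)."""
    if rng is None:
        rng = np.random.default_rng()
    z = sigma * rng.standard_normal(m)
    outliers = rng.random(m) < eps          # contamination factor eps
    z[outliers] *= np.sqrt(kappa)
    return z

z = contaminated_gaussian(10000, rng=np.random.default_rng(0))
```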

This line shows the performance of the least-squares-based IHT algorithm; of course, when there are a lot of outliers, its performance just drops. The red one is the performance of the weighted median regression algorithm, a very robust algorithm based on the L1 norm, but as the noise gets more impulsive its performance decays as well.

Here is the performance of our Lorentzian iterative hard thresholding algorithm with two choices of γ. The first one uses the γ estimated from the order statistics, and the blue one knows a priori the range of the clean measurements and sets γ accordingly; that one, of course, has the better performance. However, the performance with our estimated γ is still good and close to the other one, without having any prior information about the clean measurements.

Here are some results with alpha-stable noise; the curves are the same as before, starting with the performance of the least-squares-based IHT. The parameter α controls the impulsiveness: as α gets close to zero we have a more impulsive environment; when α equals one we have the Cauchy distribution; and when α equals two we have the Gaussian distribution, which is the classical light-tailed case.
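Alpha-stable samples for this kind of experiment can be drawn with SciPy's levy_stable distribution; the sample sizes here are arbitrary:

```python
import numpy as np
from scipy.stats import levy_stable

# alpha -> 0 gives a more impulsive environment; alpha = 1 is the Cauchy
# distribution; alpha = 2 is the Gaussian distribution (light-tailed case).
for alpha in (0.5, 1.0, 2.0):
    z = levy_stable.rvs(alpha, 0.0, size=2000, random_state=0)
    print(alpha, np.median(np.abs(z)))   # the median stays moderate even
                                         # when the variance is infinite
```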

Of course, the performance of our method is very good in the impulsive regimes, and furthermore the method is not only robust when we have impulsive noise but is also robust when we have Gaussian noise. So the nice thing is that it works in both light-tailed and heavy-tailed environments.

Here is an example of measurements corrupted with alpha-stable noise, but now varying the number of measurements, or the number of samples. The green plot is for α equal to 0.5, which is a very impulsive environment, and we see that, of course, we need more samples to compensate for the noisy measurements. When α equals two we have the Gaussian case, and the performance of the Lorentzian-based algorithm is about the same as that of the least-squares-based algorithm, which is optimal for Gaussian noise, so we are not losing too much in the Gaussian environment.

As a final example, here we have a two-dimensional signal: a 256 by 256 image. We take random Hadamard measurements, about thirty-two thousand of them, and we add Cauchy noise, so the bottom image shows the corrupted measurements. We first perform a reconstruction from the corrupted measurements with the least-squares-based iterative hard thresholding algorithm, and of course the reconstruction is not really good. Here we instead use a cleaning approach, simply cleaning the measurements by removing all the outliers before reconstructing, and the performance is still suboptimal because we are losing information. And here, from the same measurements, is the recovery with the Lorentzian iterative hard thresholding algorithm; just for comparison, I also plot the recovery with the least-squares-based algorithm in the noiseless case.

Okay, let me conclude. We have presented a simple and robust iterative algorithm that is robust against impulsive noise. We studied its performance and properties, and we saw that it is robust against heavy-tailed noise but also works in light-tailed environments. One line of future work is to leverage prior signal information, as in model-based compressive sensing, which the people at Rice are working on, because this approach is not limited to the plain hard thresholding algorithm. Thank you very much.

Are there any questions?

Question: When you assume the noise is impulsive, that means the noise itself is sparse, so you could simply augment the least-squares IHT, extending the sensing matrix with the identity, and estimate the noise as well as the sparse coefficients. I would have thought that this would be a much fairer comparison. Have you looked at that?

Answer: Yes, I have looked at those approaches. I have not done comparisons using images, but I have done comparisons for impulsive noise with these characteristics, for the contaminated Gaussian measurements. The performance is actually about the same; the difference is in the breakdown point, which depends on how sparse the corruption of your measurements is. Of course, if you have fewer outliers in the measurements, your recovery is going to be better, but as you go further, say more than fifty percent of the measurements corrupted, your performance is going to drop and you would need many more measurements to recover the sparse signal, which is the same behavior we have, with the breakdown point at that percentage. In general the performance of those algorithms is very similar to this one. They can also be iterative: some first find the sparse signal, then the corrupted measurements, and iterate, while other people just solve one joint L1 problem and find both of them.

Question: And presumably you could do the same thing with your algorithm: augment the sensing matrix with the identity and estimate all the corrupted measurements together with the coefficients.

Question: This norm is particularly popular in the geophysics community, where it is used very extensively, at least still today. Did you make comparisons there as well? It is very well known that it is good for impulsive noise and, in geophysics, for sparse signals, so putting it in the framework of compressed sensing is of course natural.

Answer: Well, I have not looked at the literature from the geophysics side. Really, we only have previous works from our group with the Lorentzian norm in the compressive sensing area, not only for the noise part but also as a sparsity-encouraging term. But that is the only thing, because I have only looked at the image processing literature, not the geophysics literature. Thank you.

Question: I have two questions. The first is about this experiment: you chose the power of the noise, but I do not know the power of the measurements. Is it bigger or smaller than the noise power?

Answer: The scale of the measurements is bigger than that of the noise, yes, for this case.

Question: The second one is about the principle of the iteration in your algorithm. The matrix W, the big W, looks a little bit similar to what you would use in iteratively reweighted least squares. I do not see what the difference is between that and the proposed algorithm.

Answer: It is related: you can look at this algorithm as a gradient iteration in which the reweighting is applied to the residual at each step, rather than solving a full reweighted least-squares problem at every iteration as in iteratively reweighted least squares. Okay, thank you.

yeah O okay