The overwhelming majority of work in compressive sensing constructs matrices that are sub-Gaussian: the entries are Gaussian, or binary in some fashion, or the matrix is a partial Fourier matrix, and the like. What is explored here is the special case where they are not.

Thank you.

So the question is: what happens if the matrices that we use in compressive sensing are not sub-Gaussian? There are some interesting statistical and geometrical characteristics that emerge out of those properties.

First we will motivate the problem and derive some characteristics that arise when these matrices are not sub-Gaussian. We will talk about stable random processes, which are one class of random variables that are not sub-Gaussian. Then we will go through some RIP-equivalent characteristics of these compressive measurements, and show some computer simulations.

In compressive sensing, the fundamental property, the restricted isometry property (RIP), shows how dimensionality reduction is achieved while the information in the original space is preserved. Several people in this session have talked about information preservation in the reduced space. With sub-Gaussian measurements you have a near-isometry: you start from the L2 space, and when you apply the projections, distances are preserved in the L2 sense as well. But the matrix has to be Gaussian, or something like it.

Chartrand and colleagues generalized this idea, saying, okay: how about if I still look at this as a near-isometry on the L2 space, but I do the regularization in an Lp norm? With that you can get sparser characteristics in the reconstruction.

Using this framework, what we will do is generalize it to apply to a compressive scenario where the matrix is no longer sub-Gaussian; in fact it is very impulsive, with rich, heavy-tailed characteristics. We will show that the isometry that is preserved here is not on the L2 space: it takes an Lp metric to an L-alpha metric, and these are very distinctly related. The p has to be less than alpha, which is less than two, and alpha can go from one to two.

In stable distributions, alpha determines the degree of non-Gaussianity, if you will: alpha equal to two means Gaussian, and the lower the alpha, the more impulsive the characteristics you will have. And of course the RIP also gives you a probability of reconstruction, and the number of measurements you need.


Now, this is the geometric interpretation. In compressive sensing, typically, if you have the L2 distance in the original space, you are going to preserve that L2 distance in the projected space. In the dimensionality-reduction literature there is a lot of interest in doing the reduction in norms that are not L2, maybe the L1 norm, which may be more tractable for the application. And from the image-processing literature it is well known that L2-norm-based measures are very sensitive, and you may want to go to other, more robust norms. So that was the motivation here: can we get an other-than-L2 distance preservation in this compressed-sensing setting? In this case, for instance, we are talking about the L1 norm here: how may it be preserved in the projections under this Lp norm? Associated with this dimensionality reduction you will then have the restricted isometry property, and on the reconstruction side you would like to get the signal back.

So the result that we will work toward is basically this result here, which is, in essence, similar to the traditional RIP, except that instead of the L2 norm we are going to have the Lp norm.

So what do we gain if we use this projection rather than a sub-Gaussian one? One is that you will preserve distances in spaces that are not L2. But there are also other properties of interest. We know that when you process data with norms that are lower than L2, the processing is a lot more robust; that addresses one of the points raised earlier. If you have noise in your observations that is not Gaussian, then your reconstruction is going to be much more robust than what you would get with Gaussian projections. If you use Lp, you also preserve more of the sparsity in the reconstruction. And, as I said, the distance preservation is in a different norm, a different space.

Okay, so when we talk about stable, non-sub-Gaussian projections, what is key is that when you generate the random matrix, rather than the entries having a Gaussian distribution, they are going to have an alpha-stable distribution, which has heavier tails than the Gaussian. So for instance here, as you change the parameter: alpha equal to two is this black curve, which is the Gaussian; alpha equal to one point five gives this blue curve; and so on. As you lower the alpha, you are going to have heavier and heavier tails in the distribution used to generate the projection matrix.
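A minimal numpy sketch of this heavy-tail behavior (illustrative only; the Chambers-Mallows-Stuck sampler and the parameter choices here are our own, not something from the talk):

```python
import numpy as np

def sas(alpha, size, rng):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck method."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    if alpha == 1.0:
        return np.tan(V)  # Cauchy special case
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(0)
gauss = sas(2.0, 100_000, rng)   # alpha = 2 recovers a Gaussian (variance 2)
heavy = sas(1.2, 100_000, rng)   # alpha = 1.2 has markedly heavier tails

# Tail mass beyond |x| > 10: essentially zero for the Gaussian, a few percent for alpha = 1.2
print(np.mean(np.abs(gauss) > 10), np.mean(np.abs(heavy) > 10))
```

Entries of the projection matrix would be drawn exactly this way, one call per matrix.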

The characteristic function of the symmetric alpha-stable distributions has this shape, exp(-gamma |t|^alpha). It is a class of distributions whose characteristic function is very compact and very nice, but whose densities in general have no closed form; it is a very interesting class. Here, if you put alpha equal to two, that is the characteristic function of a Gaussian. So in general this is the easiest way to characterize the alpha-stable distributions.
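Since the density has no closed form but the characteristic function does, one can check a sampler against exp(-|t|^alpha) directly. A small sketch (our own illustration, with alpha = 1.5 and unit dispersion assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.5
# Symmetric alpha-stable samples with unit dispersion (Chambers-Mallows-Stuck)
V = rng.uniform(-np.pi / 2, np.pi / 2, 400_000)
W = rng.exponential(1.0, 400_000)
X = (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
     * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

# Empirical characteristic function E[cos(tX)] against exp(-|t|^alpha)
# (the imaginary part vanishes by symmetry)
for t in (0.5, 1.0, 2.0):
    print(t, np.mean(np.cos(t * X)), np.exp(-abs(t) ** alpha))
```

The two printed columns agree to a few decimals, which is the sense in which the characteristic function pins down the whole family.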

And something very interesting is that the stable distributions have a stability property, much like the Gaussian: just as the Gaussian has the central limit theorem, the stable distributions have a generalized central limit theorem, or stable law, if you will.

And it is interesting because if you let these x_i be alpha-stable random variables with some dispersion — the dispersion plays the role of the power, the strength, of the variable — then if you take a linear combination of a bunch of them, the output will also be stable with the same parameter alpha, and its dispersion will behave like an L-alpha metric of the coefficients. So for instance, if each of these has dispersion one, and you add up all N of them with weights, then the output y, which is the weighted sum, will have a dispersion given by the L-alpha norm of the weights raised to alpha. So the dispersion, the power, captures the strength of the output.

um

As an example, if you have Cauchy data — if you do Cauchy random projections — the output will be Cauchy with dispersion given by the L1 norm of the data. The Cauchy is the special case of the alpha-stable distribution with alpha equal to one.
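This Cauchy case can be seen numerically. A sketch (our own illustration; the median-of-magnitudes estimator used here is one standard way to read off a Cauchy scale, not necessarily the talk's):

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 1000, 4000
x = np.zeros(n)
x[rng.choice(n, 10, replace=False)] = rng.normal(0, 1, 10)  # sparse signal

A = rng.standard_cauchy((K, n))  # Cauchy = alpha-stable with alpha = 1
y = A @ x                        # each y_i is Cauchy with scale ||x||_1

# median(|Cauchy(0, gamma)|) = gamma, so the median of |y| recovers ||x||_1
est = np.median(np.abs(y))
print(est, np.linalg.norm(x, 1))
```

The projections carry the L1 norm of the data in their dispersion, which is exactly the "distance preservation in another norm" being described.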

Okay. So when you talk about stable distributions, since they are heavy-tailed, you cannot use second-order moments, second-order statistics: because of the very heavy tails, the means and variances are not well defined, or not finite. What you have to use instead are fractional lower-order moments. If X is an alpha-stable random variable, then the expected value of |X| to the p, for p less than alpha, is finite, and it is proportional to the dispersion raised to the power p over alpha. Therefore, if you take the Lp norm of a vector of these variables — a sum of a lot of these terms — then its expected value will be another constant, depending on p and alpha, times the corresponding power of the dispersion of the variables. This is what can be used to do the analysis that we need for the RIP result we will get to.
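For the Cauchy case this is easy to check empirically; the closed form E|X|^p = 1/cos(p·pi/2) for the standard Cauchy, 0 < p < 1, is a known result we use for comparison (our own illustration, with p = 0.3 chosen so the estimator itself has finite variance):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.3                                    # fractional order, p < alpha = 1
X1 = rng.standard_cauchy(500_000)          # dispersion gamma = 1
X2 = 3.0 * rng.standard_cauchy(500_000)    # dispersion gamma = 3

# E|X|^p = C(p, alpha) * gamma^(p/alpha); here alpha = 1, so the ratio is 3**p
ratio = np.mean(np.abs(X2) ** p) / np.mean(np.abs(X1) ** p)
print(ratio, 3.0 ** p)

# Known closed form for the standard Cauchy: E|X|^p = 1 / cos(p * pi / 2)
print(np.mean(np.abs(X1) ** p), 1.0 / np.cos(p * np.pi / 2.0))
```

Second moments, by contrast, never stabilize here: the variance of a Cauchy sample mean is infinite, which is why the whole analysis runs on these fractional moments.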

So this is the result we are going to look for. We are going to have a random matrix, generated just like the Gaussian compressive random matrices, except that the entries are going to be alpha-stable, with alpha between one and two. We will show that if K, the row dimension of this matrix Phi, is larger than a quantity of the order of S log(n/S) — very similar to the condition for the traditional RIP, where S is the sparsity — then this distance characterization will be preserved, with a given probability.

To prove that, we will use steps similar to those in the traditional RIP construction, except that we will be using these different norms and the fractional moments. The procedure is, of course, a probabilistic approach: first show that for a fixed sparse x the RIP holds; then show that the RIP is achieved for any S-sparse x in R^n; and then for any submatrix of the projection matrix. So it is very similar to the traditional RIP derivations; let me just sketch how we do this.

The first lemma tells us that if S is the sparsity, and we look at a K-by-S submatrix of this projection matrix, then this bound will hold — these are some constants that we will derive — and it will hold with a given probability. That probability is related to the fractional moments we just discussed a minute ago.

If y is equal to the projection matrix times x, each of the entries of y will be a linear combination of the entries of x with alpha-stable weights. Therefore each entry y_i of the projection will be alpha-stable with zero mean, and its dispersion will be the L-alpha norm of x raised to alpha. The Lp norm of y is then given by this sum, where each of the y_i is as given there; this is just the sum of the elements of y raised to the power p. You can then take the expected value of the projections, which, again, is a constant times the corresponding power of the L-alpha norm of the vector x.
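The core fact — that each projection entry is alpha-stable with scale equal to the L-alpha norm of x — can be checked through the fractional moments. A self-contained sketch (our own, with alpha = 1.5, p = 0.5 and two hand-picked signals as assumptions):

```python
import numpy as np

def sas(alpha, size, rng):
    """Symmetric alpha-stable samples with unit dispersion (Chambers-Mallows-Stuck)."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(4)
alpha, p, K = 1.5, 0.5, 200_000

x1 = np.array([1.0, 1.0, 1.0, 1.0])  # ||x1||_alpha = 4**(2/3), about 2.52
x2 = np.array([3.0, 0.0, 0.0, 0.0])  # ||x2||_alpha = 3

A = sas(alpha, (K, 4), rng)          # alpha-stable projection matrix
y1, y2 = A @ x1, A @ x2              # each entry alpha-stable, scale ||x||_alpha

# E|y|^p scales as ||x||_alpha^p for p < alpha, so moment ratios reveal norm ratios
ratio = np.mean(np.abs(y1) ** p) / np.mean(np.abs(y2) ** p)
print(ratio, (4.0 ** (2.0 / 3.0) / 3.0) ** p)
```

This is the mechanism the lemma leans on: the Lp statistics of the measurements concentrate around a deterministic function of the L-alpha norm of the signal.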

So we have the expected value, and then we have the variance, which you can derive similarly. You have the mean and you have the variance, and you are trying to get this bound, in order to get the probability that the distances will be preserved within this tolerance. Then, if you approximate the distribution of this Lp norm by an inverse Gaussian, you get a bound that relates how likely these distances are to land within a given epsilon-ball, if you will. That will be a function of the parameters mu and lambda of the inverse Gaussian, built from the means and variances that we just reviewed on the previous slide.

And the Chernoff bound provides that probability, which will be a function of K: it tells you that, to be within the ball with a given probability, you need so many projections — so many compressive sensing measurements.

We then generalize that, not just for a given x but for any arbitrary x, so that it preserves the Lp norm; that just changes this probability — you have to be a little more careful with the union bound over the distances. Then, for any submatrix, you just change the constants that enter the proof, and manipulate the constants to put the result into this form. At the same time, that gives the minimum number of measurements needed to attain the RIP, which is again a function of S log n.

So this is an example of what happens when you do these projections. You have the dimensionality reduction, you have these matrices that satisfy the result, and then you want to do the reconstruction. But when you do the reconstruction, you are projecting with entries that are not sub-Gaussian, and therefore you cannot use traditional compressive sensing algorithms, like the L2-L1 methods that rely on L2 distances. Even the greedy algorithms would fail, because of the impulsive nature of the measurements.

So in this example we are getting compressive measurements y, where the projections are alpha-stable with alpha equal to one point two, and the noise is also stable with alpha one point two; this is its density function. The number of measurements K is four hundred, and S is the sparsity. And this is a reconstruction algorithm that is not developed in this paper but in a different paper: it does not use an L2 data-fitting term, but uses a Lorentzian-based metric to do the data fitting.
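Why a Lorentzian data-fitting term survives impulsive noise is easy to see numerically. This sketch only illustrates the cost function's behavior; it is not the reconstruction algorithm from the paper, and the residual values and scale gamma are our own choices:

```python
import numpy as np

def lorentzian(u, gamma=1.0):
    """Lorentzian cost: grows logarithmically, so a single outlier carries little weight."""
    return np.sum(np.log1p((u / gamma) ** 2))

r_clean = np.full(100, 0.1)   # small residuals on every sample
r_spike = r_clean.copy()
r_spike[0] = 1000.0           # one impulsive error, as alpha-stable noise produces

# L2 cost explodes with the single outlier; the Lorentzian cost barely moves
print(np.sum(r_clean ** 2), np.sum(r_spike ** 2))
print(lorentzian(r_clean), lorentzian(r_spike))
```

An L2 fit would let that one spike dominate the solution; the log-shaped cost bounds its influence, which is the point of the Lorentzian choice.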

Then you can see that the original data, the blue circles, is well preserved in the recovered data. If you use an L1-based recovery with the usual sparsity terms, you will have a lot of errors — the stars. And then you can test the number of measurements that you require: again, for this method we have a theoretical K that you need so that, in this reconstruction, you are able to do the inversion within the bounds of the RIP that we derived.

So, in summary: what if you don't use sub-Gaussian measurement matrices? We showed that you can still get this preservation of distances, not in L2 but in other norms. At the same time, you will find that the reconstruction is a lot more robust against noise that is very impulsive. And you can also use the sparsity-inducing norms that you want. Questions?

[Question:] You are using the Chernoff bounds on the fractional moments, so you are tied to those; or you could use approximations that are not Gaussian, where you put an inverse Gaussian or some other distribution?

Some of that is true, sure, because we are using those bounds on the Lp-type variables. Yes.

Yeah, what happens is this: you generate your matrices using a stable distribution, so something that is not Gaussian; you generate the projections, and you get a vector of measurements. But these measurements are linear combinations of alpha-stable variables, so they are not sub-Gaussian, and not Gaussian. So the inversion algorithms cannot use traditional L2 data-fitting terms with the regularizer; you have to use norms that are robust, perhaps like the ones that were mentioned earlier — L1, or something more robust than that. In this case, this illustrates that if you use data-fitting norms that are more robust, like the Lorentzian...

Yes, yes — that will work; you will get a good result if you do that. The algorithm that we used within this setup is a Lorentzian type, but we could derive an algorithm that uses an Lp with p less than one. Well, the L1 is the approximation for the L0, right. We have not really run that experiment, because there are two terms: there is the data-fitting term, which is the one where you would use, say, zero point six, and there is the sparsity term, which could be the L0 or the L1. If you combined an algorithm with the data-fitting norm at zero point six and used the L0, it would be very good; or an Lp with zero point six rather than one may be fine. But the algorithm we tried was one that was somewhat tractable, so that we could carry it through, as was presented.

And this figure shows you that if you use the L2, you are going to get a very spurious result, because of the structure of the projections, which is not sub-Gaussian.

Yes, that's a good question. Well, we explored the fundamental, theoretical characteristics of such matrices; how to generate them and how to use them in practice is something still to be explored. You could try, for instance, Gaussian mixtures, which are easy to generate and a very good approximation of these non-sub-Gaussian matrices; but in general, how to come up with them in practice is an open question. Okay, thank you.