0:00:14 The overwhelming majority of work in compressive sensing constructs matrices that are sub-Gaussian: the entries are Gaussian, or binary in some fashion, or you take a partial Fourier matrix, and the like.
0:00:52 That is the special case that has been explored.
0:00:57 But the question we ask is: what happens if the matrices that we use in compressive sensing are not sub-Gaussian? There are some interesting statistical and geometrical characteristics that emerge out of those.
0:01:19 So first we will motivate and describe some of the characteristics that arise when these matrices are not sub-Gaussian; we will talk about stable random processes, which are one class of random variables that are not sub-Gaussian; then we will go through some RIP-equivalent characteristics of the compressive measurements and show some computer simulations.
0:01:46 In compressive sensing, the fundamental property, the RIP, shows how dimensionality reduction is achieved while preserving the information in the signal space. Several people in this session have talked about information preservation in a different space.
0:02:08 With the typical sub-Gaussian measurements you have an isometry: you go from the L2 space and, when you form the projections, you have distance preservation in the L2 space as well. But the matrix has to be Gaussian, or something like that.
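The L2 preservation just described can be illustrated with a tiny numerical sketch (not from the talk; a scaled i.i.d. Gaussian matrix is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 2000, 600                          # ambient and projected dimensions
x = rng.standard_normal(n)                # an arbitrary signal

# i.i.d. Gaussian projection, scaled so that E||Phi x||_2^2 = ||x||_2^2
Phi = rng.standard_normal((k, n)) / np.sqrt(k)
y = Phi @ x

ratio = np.linalg.norm(y) / np.linalg.norm(x)
print(ratio)                              # close to 1: L2 distances survive
```

With a few hundred rows the ratio concentrates tightly around one; the RIP makes this isometry uniform over all sparse signals.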
0:02:34 Chartrand and co-workers generalized this idea, saying: how about I still look at this as an isometry on the L2 space, but I do the regularization in an Lp norm? Then you can get stronger sparsity characteristics in the reconstruction.
0:02:56 Using this scheme, what we will do is generalize it to apply to a compressive scenario where the matrix is no longer sub-Gaussian; in fact it can be very impulsive, with heavy-tailed characteristics. We will show that the isometry that is preserved here is not in the L2 space: it goes from an Lp metric to an Lα metric, and these two are very distinct.
0:03:31 The p here will be less than two, that is correct, and the α can go from one to two. For the stable distributions, α determines the degree of non-Gaussianity, if you will: α of two means Gaussian, and the lower the α, the more impulsive the characteristics you will have. And of course the RIP will also give you a probability of reconstruction and the number of measurements you need.
0:04:11 Now, this is the geometric interpretation. In compressive sensing, typically, if you have the L2 distance in the original space, you are going to preserve that L2 distance in the projected space.
0:04:38 In the dimensionality-reduction literature there is a lot of interest in doing this dimensionality reduction in norms that are not L2 — maybe the L1 — which may be more attractive for the application.
0:04:54 And from the image processing literature it is well known that L2-norm-based measures are very fragile, and you may want to go to other, more robust norms. So that was the motivation here: can we get distance preservation other than the L2 one in this compressive sensing setting? In this case, for instance, if we are talking about the L1 norm here, how about preserving it in the projections as this Lp norm? Associated with this dimensionality reduction you will then have the restricted isometry property, and in the reconstruction you would like to get the signal back.
0:05:33 So the result that we will look at is basically this result here, which in essence is similar to the traditional RIP: instead of the L2 norm, we are going to have the Lp norm. Now, what are the things that you gain if you use this projection rather than the traditional L2 one?
0:06:00 One is that you will preserve distances in spaces that are not L2. But there are also other properties of interest. We know that when you process data with norms lower than L2, the processing is a lot more robust. That addresses one of the points of the question raised here earlier: if you have noise in your observations that is not Gaussian, then your reconstruction is going to be much more robust than what you would get with a Gaussian projection.
0:06:35 If you use Lp you also preserve more of the sparsity of the reconstruction. But, as I said, the distance preservation is then in a different norm, a different space.
0:06:48 OK. So when we talk about these alpha-stable, non-sub-Gaussian projections, what is key is that, when you generate the random matrix, rather than the entries obeying a Gaussian distribution, they are going to have an alpha-stable distribution, which has heavier tails than the Gaussian. For instance here, as you change the parameter: α equal to two is this black curve, which is the Gaussian; with α equal to one point five you get this blue curve; and so on. As you lower the α, you are going to have heavier and heavier tails. That is how we generate the projection matrix.
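Entries like these can be drawn with the Chambers–Mallows–Stuck recipe; this is a generic sketch of that standard method (not code from the talk), for symmetric alpha-stable variables with unit dispersion:

```python
import numpy as np

def sas_samples(alpha, size, rng):
    """Symmetric alpha-stable samples (unit dispersion) via the
    Chambers-Mallows-Stuck method; alpha = 2 gives a Gaussian,
    alpha = 1 a standard Cauchy."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                  # unit exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(1)
gauss_like = sas_samples(2.0, 100_000, rng)    # Gaussian case (variance 2)
heavy = sas_samples(1.2, 100_000, rng)         # the impulsive regime used later
cauchy_like = sas_samples(1.0, 100_000, rng)   # standard Cauchy

# Lower alpha -> heavier tails: fraction of entries beyond +-5
print([float(np.mean(np.abs(s) > 5)) for s in (gauss_like, heavy, cauchy_like)])
```

Filling a K-by-N matrix with such draws gives the alpha-stable projection matrix described above.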
0:07:27 The characteristic functions of alpha-stable distributions all have this shape. It is a class of distributions whose characteristic function is very tractable — very compact, very nice — but whose densities are not given in closed form; that makes it a very interesting class. If you put α equal to two, that is the characteristic function of a Gaussian. In general, this is the easiest way to characterize the alpha-stable distributions.
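For the symmetric, zero-centered case the talk works with, that characteristic function can be written as (reconstructed here, since the slide itself is not in the transcript):

```latex
\varphi(t) \;=\; \mathbb{E}\!\left[e^{jtX}\right] \;=\; e^{-\gamma\,|t|^{\alpha}},
\qquad 0 < \alpha \le 2,\quad \gamma > 0,
```

so that α = 2 gives exp(-γt²), a Gaussian, and α = 1 gives exp(-γ|t|), a Cauchy; γ is the dispersion, which plays the role the variance plays for the Gaussian.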
0:07:56 I think it is very interesting that the stable distributions have a stability property. Much like the Gaussian has the central limit theorem — sums of finite-variance variables are asymptotically Gaussian — the stable distributions obey a generalized central limit theorem, or stable law, of their own.
0:08:14 And it is interesting because if you let these a_i be alpha-stable random variables, each with some dispersion γ, and you form a linear combination of a bunch of them, the output will also be stable, with the same parameter α, and its dispersion will behave like an Lα metric of the combination weights.
0:08:47 So, for instance, if all of these have dispersion one, and you add all N of them weighted by the data, then the output Y, which is this weighted sum, will have a dispersion equal to the Lα norm (raised to the α) of the data. So the dispersion parameter captures the Lα norm of the data.
0:09:12 Here is an example: if you have Cauchy entries — if you do Cauchy random projections — the output will be Cauchy with a dispersion given by the L1 norm of the data. The Cauchy is the special case α equal to one of the stable distributions.
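That Cauchy special case is easy to check numerically. In this sketch (illustrative, not from the talk) each measurement is Cauchy with scale equal to the L1 norm of the data, so a median-based scale estimate recovers that norm:

```python
import numpy as np

rng = np.random.default_rng(2)

x = np.array([3.0, -1.5, 0.0, 2.5, -4.0])     # toy data, ||x||_1 = 11
k = 200_000                                    # many projections, for a tight estimate

Phi = rng.standard_cauchy((k, x.size))         # alpha = 1 stable entries
y = Phi @ x                                    # each y_i ~ Cauchy(0, ||x||_1)

# For Cauchy(0, gamma), median(|Y|) = gamma, giving a robust L1-norm estimate
l1_estimate = np.median(np.abs(y))
print(l1_estimate)                             # approximately 11
```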
0:09:28 OK. Now, when you talk about stable distributions, since they are heavy-tailed, you cannot use second-order moments or second-order statistics: the tails are so heavy that the variances are not well defined. What you have to use instead are fractional lower-order moments.
0:09:51 If X is alpha-stable, then the expected value of |X| to the p — for p below α — is finite, and when you compute these moments they are related to the dispersion and to p. Therefore, if you take the Lp norm of the projection, which is a sum of a lot of these terms, then its expected value will be another constant, depending on p and α, times the dispersion of the variable. This can be used to do the analysis that we need for the RIP result that we will now go to.
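As a sanity check of the fractional-moment relation (using the standard-Cauchy constant sec(pi·p/2), a known closed form for p < 1; the sketch is illustrative, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.4                              # fractional order; needs p < alpha (alpha = 1 here)
x = rng.standard_cauchy(500_000)     # alpha-stable with alpha = 1, dispersion 1

# E|X|^p = sec(pi p / 2) for a standard Cauchy variable
empirical = np.mean(np.abs(x) ** p)
theory = 1.0 / np.cos(np.pi * p / 2.0)

# Scaling: multiplying X by gamma scales the fractional moment by gamma**p,
# which is how these moments expose the (otherwise hidden) dispersion
gamma = 3.0
scaled = np.mean(np.abs(gamma * x) ** p)
print(empirical, theory, scaled / empirical)   # last ratio equals gamma**p
```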
0:10:32 So this is the result we are going to look for. We are going to have a random matrix; but rather than generating the compressive random matrix with Gaussian entries, we are generating matrices whose entries are alpha-stable, with α between one and two.
0:10:53 We will show that if K, the number of rows of this matrix Φ, is larger than a number of measurements that scales like S log(N/S) — very similar to the condition for the traditional RIP, where S is the sparsity — then this distance characterization will be preserved, with a given probability.
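In symbols, the statement being described is of the form (an approximate reconstruction from the spoken description, since the slide is not in the transcript): with probability close to one, for all S-sparse x,

```latex
(1-\delta)\,C\,\|x\|_{\alpha} \;\le\; \big\|\Phi x\big\|_{p} \;\le\; (1+\delta)\,C\,\|x\|_{\alpha},
\qquad 1 \le p < \alpha \le 2,
```

provided the number of measurements satisfies K ≥ C′ · S log(N/S), mirroring the traditional RIP with the Lp/Lα pair in place of L2 on both sides.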
0:11:18 OK. To prove that, we use steps similar to the ones in the traditional RIP constructions, except that we will be using these fractional moments. The procedure is, of course, the probabilistic approach: first show that, for a fixed X that is sparse, the RIP holds; then show that the RIP is achieved for any S-sparse X in R^N; and then for any submatrix of Φ. It is very similar to the other, traditional RIP derivations, so we will just sketch how we do this.
0:12:06 The first lemma tells us that if S is the sparsity, and we look at a K-by-S submatrix of the projection matrix, then this bound will hold — these are some constants that we derive — with a given probability. And that probability is related to the fractional moments we discussed a minute ago.
0:12:42 If Φ is the projection matrix, each of the entries of Y will be a linear combination of the entries of X with alpha-stable weights. Therefore each entry Y_i of the projection will be alpha-stable with zero mean, and with a dispersion determined by X.
0:13:06 The Lp norm of Y, then, is this sum, where each of the terms is as given there. This is just the sum of the elements of Y raised to the power p, and you can then take the expected value of this quantity for the projection, which again is a constant times the Lα norm of the vector, raised to the p.
0:13:46 So then we have the expected value, and we also need the variance; for the variance you can proceed similarly. So you have the mean and you have the variance, and you are trying to get this bound in order to get the probability that the distances will be preserved.
0:14:05 Then, if you approximate the distribution of this Lp norm by an inverse Gaussian, you obtain a bound that lets you relate how likely these distances are to lie within a given ball, if you will. That will be a function of this parameter λ and the parameter μ of that distribution, and of the mean and variance that we just reviewed on the previous slide.
0:14:35 A Chernoff-type bound then provides that probability, which will be a function of K; it tells you, for a given probability of being within the ball, how many projections — how many compressive sensing measurements — you need.
0:14:55 We then generalize that: not just for a given X, but for any arbitrary X whose Lp norm is to be preserved — that just changes this probability; you just have to be a little more careful with the bounds on the distances. And then, for any submatrix, you just change the constants that enter the proof; you just have to manipulate the constants to put them into this form, which is what we do.
0:15:28 At the same time, this gives the minimum number of measurements needed to attain the RIP, which again is a function of S, log N, and so on.
0:15:40 So this is an example of what happens when you do these projections. You have the dimensionality reduction; you have these matrices that satisfy this Lp restricted isometry; and then you want to do the reconstruction — not only the dimensionality reduction, you also want the reconstruction.
0:15:58 But when you do the reconstruction, you are projecting with these non-sub-Gaussian entries, and therefore you cannot use traditional compressive sensing algorithms — like the L2–L1 methods — that rely on L2 distances. Note also that greedy algorithms would fail, because of the impulsive nature of the measurements.
0:16:27 So in this example we are getting compressive measurements Y, where the rows of the matrix are the alpha-stable projections with α equal to one point two, and the noise is also alpha-stable with α one point two. This is its density function. The number of measurements K is four hundred, and S is the sparsity.
0:16:55 The reconstruction algorithm here is not developed in this paper but in a different paper; it does not use an L2 data-fitting term, but uses Lorentzian-based metrics to do the data fit.
0:17:19 You can then see that the original data — the blue circles — is well recovered; it is well preserved. If you use the L1 fit, you recover the sparsity terms only weakly and you will have a lot of errors; those are the stars.
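The Lorentzian cost referred to here is usually taken as the sum of log(1 + u_i^2/gamma^2) over the residuals; this toy comparison (a sketch under that assumption, not code from the talk) shows why one impulsive error cannot dominate it the way it dominates a squared-L2 fit:

```python
import numpy as np

def lorentzian(u, gamma=1.0):
    """Lorentzian data-fitting cost: sum of log(1 + (u_i / gamma)^2).
    Each term grows only logarithmically, so a single impulsive
    residual has bounded influence on the total cost."""
    u = np.asarray(u, dtype=float)
    return float(np.sum(np.log1p((u / gamma) ** 2)))

r = np.array([0.1, -0.2, 0.15, 1000.0])   # residuals with one impulsive error

l2_cost = float(np.sum(r ** 2))           # ~1e6: the outlier swamps the fit
lor_cost = lorentzian(r)                  # ~13.9: the outlier barely registers
print(l2_cost, lor_cost)
```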
0:17:38 And then you can test the number of measurements that you require. Again, for this method we have a theoretical K that you need so that, in this reconstruction, you are able to do the inversion within the bounds of the RIP that we derived.
0:18:02 So, in summary: what happens if you do not use sub-Gaussian measurement matrices? We showed that you can still get this preservation of distances — not in L2, but in other norms — and, at the same time, you will find that the scheme is a lot more robust against noise that is very impulsive, and also that you can use the sparsity-inducing norms that you want to enforce.
0:19:06 [Audience question, inaudible.]
0:19:07 What you are using there are the Chernoff bounds on the tail, so you are doing something close to that — or you can use approximations, in regimes that are not Cauchy, where you can put in the inverse Gaussian or other bounds; some of that is true.
0:19:34 Sure, because you are using those bounds on the Lp-type variable.
0:19:41 [Audience question, inaudible.]
0:19:50 Yeah, yes.
0:20:13 What happens is this: you generate your matrices using a stable distribution, so you have something that is not Gaussian. You generate the projections, so you get a vector of measurements. But these measurements are linear combinations of alpha-stable variables, so they are not sub-Gaussian — they are not Gaussian.
0:20:36 So the inversion algorithms cannot use the traditional L2 data-fitting term with a regularizer. You have to use norms that are robust — perhaps, like the ones that were mentioned earlier, the L1 or something more robust than that. So in this case, this illustrates that if you use data-fitting norms that are more robust, like the Lorentzian norm that was used here...
0:21:18 Yes, yes — that will do well; you will get a good result if you do that, yes. The algorithm that we used here is of a Lorentzian type, but we could derive an algorithm that uses an Lp with p less than one.
0:21:45 Yeah, well, the L1 is the approximation for the L0 there, right.
0:21:57 We have not really run that experiment, because there are two terms there: there is the data-fitting term — which is the one that we used, set at zero point six — and there is the sparsity term, which could be the L0 or the L1. So if you combined an algorithm that used the data-fitting norm at zero point six with the L0, it would do very well — or an Lp at point six rather than the L1, maybe.
0:22:23 But the algorithm that we tried was one that was already available, so that we could go down the path that was presented here.
0:22:33 And this here shows you that if you use the L2 data fitting, you cannot do the recovery properly — you are going to get a lot of spurious results, because of the structure of the projection, which is not sub-Gaussian.
0:22:53 Yes, that is a good question. Well, we explored the fundamental characteristics of such matrices; how you generate them and how you use them in practice, that is something still to be explored. You could try, for instance, Gaussian mixtures, which are easy and a very good approximation of non-sub-Gaussian matrices; but in general, how to do it in practice — that is an open question.
0:23:25 OK, thank you.