0:00:29 Can everyone hear me? Okay.
0:00:33 Good day to you all. My name is Praveen, and I am from the University of Michigan. This is joint work with Professors Mike Flynn and Anna Gilbert, and with Shahrzad Naraghi. I am going to talk about a low-power compressive sampling ADC that we call the rand-PPM.
0:00:53 Before I explain what the rand-PPM is, I need to explain what PPM stands for. It stands for pulse position modulation and, as you can probably understand from the name, the information is basically present in the positions of the pulses that it produces.
0:01:10 Before going into the details, I have to mention that the PPM ADC falls into the general category of ADCs called time encoders. That is because they convert the voltage information into time delays and then digitize that time information using efficient time-to-digital converters. The advantage of that is that it lowers the power of the ADC: you can now replace all the analog circuits of the classical ADC with digital parts, and that reduces the power.
0:01:39 Also, with the current scaling-down trends in CMOS, the chip areas are going down, the gate delays are going down, and the power supply voltages are going down, so it is much easier to get a finer time resolution than a finer voltage resolution. Making use of all these advantages, this PPM ADC was proposed in 2009 by Naraghi et al.
0:02:02 Without going into all the details: the sampling sequence of the PPM ADC is built on a reference ramp signal. The ramp signal is periodic, and the input signal is compared with this reference ramp continuously; the points where the signal and the ramp intersect are the points that get recorded. Because the starting points of the ramps are known and the slopes of the ramps are known, the positions of the pulses, the delta-t1 and delta-t2 here, also give you the information about the voltage of the signal at those points. So effectively what the ADC does is a nonuniform, signal-dependent kind of sampling.
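To make that concrete, here is a minimal simulation sketch of such a ramp-crossing sampler; it is my own illustration rather than the authors' implementation, and every name and parameter in it is an assumption.

```python
import numpy as np

def ppm_sample(x, t, ramp_period, ramp_starts=None, lo=-1.0, hi=1.0):
    """Record the times where x(t) first crosses a periodic reference ramp.

    x, t        : finely sampled input and its time grid (stand-in for analog)
    ramp_period : period of the reference ramp
    ramp_starts : start time of each ramp segment (default: the uniform grid)
    lo, hi      : each ramp sweeps linearly from `lo` to `hi` over one period
    """
    if ramp_starts is None:
        ramp_starts = np.arange(t[0], t[-1], ramp_period)
    slope = (hi - lo) / ramp_period
    crossings = []
    for t0 in ramp_starts:
        seg = (t >= t0) & (t < t0 + ramp_period)
        d = x[seg] - (lo + slope * (t[seg] - t0))    # signal minus ramp
        idx = np.where(np.diff(np.signbit(d)))[0]    # sign change = a crossing
        if idx.size:
            crossings.append(t[seg][idx[0]])         # record the pulse position
    return np.array(crossings)
```

Because each ramp's start and slope are known, the crossing time also encodes the signal's voltage there, which is exactly why the recorded pulse positions amount to nonuniform, signal-dependent samples.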
0:02:44 Classically, because of the nonuniform nature of the samples, it turns out to be difficult to use linear reconstruction, so nonlinear iterative reconstruction techniques were classically used. Without going into the details, I will just mention that those algorithms need the signal to be sampled at about 1.2 times the Nyquist rate, and below the Nyquist rate those algorithms diverge.
0:03:07 Our goal here is to take this PPM ADC and convert it into a compressive sampling ADC. By that we mean that we need a sampler in the ADC that samples the signal at a sub-Nyquist rate in the time domain, and we also want a reconstruction algorithm inside the ADC that is fast enough and accurate enough to reconstruct the signal in the frequency domain. Of course, to use a compressive sensing design we need to assume that the signal is sparse.
0:03:34 The input signal is assumed to be sparse in the frequency domain; it is S-sparse, which means it has only S dominant frequencies. If you sort the coefficients by magnitude, they have to decay fast, and there should be only S dominant ones.
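In symbols, a plausible formalization of this sparsity model (my notation, not necessarily the paper's) is:

```latex
f(t) \;=\; \sum_{i=1}^{S} a_i \, e^{\,j 2\pi f_i t} \;+\; r(t), \qquad
|a_1| \ge |a_2| \ge \cdots \ge |a_S| \gg \text{everything in } r(t),
```

that is, after sorting, the coefficient magnitudes fall off quickly beyond the S dominant tones.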
0:03:47 Of course, a straightforward way to convert the PPM ADC into a compressive sampling ADC is to just sample at a sub-Nyquist rate and then use a nonlinear reconstruction technique; we used a matching-pursuit kind of reconstruction. That is one way.
0:04:07 As we will see, that is much inferior to the rand-PPM design, where the randomization idea is to introduce randomness into the system appropriately; specifically, we make the starting points of the ramps random. We have tried different random distributions, but the uniform distribution turned out to be the best, so the simulations in this presentation are for that. Again, the sampling is now random and non-uniform, and it preserves all the properties and advantages of the PPM design.
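On top of the earlier sketch, the randomization itself is a one-line change: draw the ramp start times uniformly at random instead of taking the fixed grid (the exact parameterization below is my guess, for illustration only).

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_ramps = 1e-3, 100                      # illustrative ramp period and count
regular_starts = np.arange(n_ramps) * T     # regular PPM: fixed grid 0, T, 2T, ...
jitter = rng.uniform(0.0, T, size=n_ramps)  # rand-PPM: uniform random offsets
random_starts = regular_starts + jitter
# ...then pass random_starts as `ramp_starts` to ppm_sample() above.
```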
0:04:39 We used two different algorithms to reconstruct the signal. Both of them are greedy, matching-pursuit kind of algorithms, but since I have only a little time, I will discuss only the first algorithm.
0:04:55 Before going to the algorithm, let me explain a little bit of how the measurement matrix of our system is going to be different from the classical compressed sensing matrix.
0:05:05 Usually in compressed sensing, if we say f is your input signal and y are your measurements, we take random linear measurements through a matrix Φ. If you assume that the linear measurements are nothing but random on-grid sampling, then the matrix Φ is just a collection of random rows from the identity matrix. And if f is sparse in the frequency domain, then Ψ is the DFT matrix, x is the representation of f in the frequency domain, and we assume that x is sparse and want to reconstruct it. So the net measurement matrix is just the product of Φ and Ψ, which would be a sub-sampled DFT matrix.
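Written out, this classical setup is:

```latex
y \;=\; \Phi f, \qquad f \;=\; \Psi x \ \ (x \text{ sparse})
\quad\Longrightarrow\quad y \;=\; \Phi \Psi \, x,
```

with Φ keeping a random subset of the rows of the identity, so that ΦΨ is a sub-sampled DFT matrix.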
0:05:44 But in the case of PPM, Φ is not a matrix like this; it is an interpolation matrix now, so our measurement matrix B is not going to be a sub-sampled DFT. To see what B is, consider one step inside the algorithm's iteration. If you assume that the signal has only one frequency f0, that is, the signal equals e^(j2π f0 t), and it is given that the signal is sampled at the time points t_1 to t_K, then we know what the measurements have to look like. Here K is the number of measurements that the ADC takes, and N is the number of measurements if we sampled at the Nyquist rate; of course we want to keep K much smaller than N, so the measurement matrix is a K×N matrix. Using this observation, if you are looking for a tone at frequency f1 you expect measurements of this form, for a tone at frequency f2 measurements of that form, and so on; we can fill up the columns of the matrix B this way and normalize it appropriately.
0:06:46 A point to note here is that the measurement matrix is random, and the randomness comes from the time points t_1, ..., t_K at which the signal was sampled. Since these points are non-uniform and do not lie on any Nyquist grid, the matrix B does not necessarily satisfy any typical RIP. So we do not check for RIP; rather, we make do with something much weaker: we look at the correlations between the different columns, develop bounds on them, and use those bounds to further derive guarantees on the algorithm.
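Here is a small sketch of how such a matrix B could be formed from the recorded sample times, together with the column-correlation check just mentioned; the grid and normalization conventions are my assumptions, not necessarily the paper's.

```python
import numpy as np

def build_B(sample_times, N, T_nyq):
    """K x N matrix: the column for frequency f holds e^{j 2 pi f t_k} / sqrt(K)."""
    K = len(sample_times)
    freqs = np.arange(N) / (N * T_nyq)        # the Nyquist frequency grid
    return np.exp(2j * np.pi * np.outer(sample_times, freqs)) / np.sqrt(K)

rng = np.random.default_rng(1)
N, K, T_nyq = 256, 32, 1.0                    # illustrative sizes, K << N
t_k = np.sort(rng.uniform(0, N * T_nyq, K))   # nonuniform, off-grid sample times
B = build_B(t_k, N, T_nyq)

G = B.conj().T @ B                            # Gram matrix: column correlations
mu = np.abs(G - np.diag(np.diag(G))).max()    # largest off-diagonal correlation
print(f"mean diagonal = {np.diag(G).real.mean():.3f}, max correlation = {mu:.3f}")
```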
0:07:22 Now, the reconstruction algorithm. It is similar to any matching-pursuit-like algorithm, and instead of going into the details I will only mention that it has two blocks: the frequency identification block and the coefficient estimation block. The most intensive step in the identification block is the least squares, which we deal with by deferring it to the next iteration. The coefficient estimation step sits inside the iterations, and the most intensive part there is the multiplication of B-transpose times the residual.
0:07:56 Because of the special structure of the matrix B, we can formulate this B-transpose-times-r product as an inverse non-uniform FFT (NUFFT), and we can use existing algorithms to evaluate it in order N log N time. So if the number of iterations of the algorithm is I, the average run time is of order I·N·log N.
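The following is a minimal sketch of that two-block greedy iteration. It uses a plain matrix product where a real implementation would use the inverse NUFFT, and the support size, iteration count, and function names are my own placeholders rather than the paper's exact algorithm.

```python
import numpy as np

def greedy_reconstruct(B, y, S, n_iter=10):
    """Matching-pursuit-style recovery: identify frequencies, then estimate coefficients."""
    N = B.shape[1]
    x_hat = np.zeros(N, dtype=complex)
    support = set()
    residual = y.copy()
    for _ in range(n_iter):
        # Frequency identification: correlate the residual with every candidate tone.
        # This B^T r product (conjugate transpose here) is the step an inverse
        # NUFFT would accelerate.
        scores = B.conj().T @ residual
        support |= {int(i) for i in np.argsort(np.abs(scores))[-S:]}
        # Coefficient estimation: least squares restricted to the current support.
        idx = sorted(support)
        coef, *_ = np.linalg.lstsq(B[:, idx], y, rcond=None)
        x_hat[:] = 0
        x_hat[idx] = coef
        residual = y - B @ x_hat
    return x_hat

# Example use with the earlier sketch: x_hat = greedy_reconstruct(B, B @ x_true, S=3)
```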
0:08:20 Now let us briefly look at why and how the algorithm works. Look at this B-transpose-r product: initially the residual r is nothing but the measurement y, so B-transpose-r is nothing but B-transpose-B times x. We can prove that when the number of measurements K is big enough, where S is the sparsity of the signal and epsilon is some constant, the off-diagonal elements of this Gram matrix are small enough; from that we can prove that the estimates we get have expected values quite close to the original values, and that their variance is bounded by the energy of the signal.
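In notation, reconstructed from this description rather than copied from the paper: with unit-norm columns as in the earlier sketch,

```latex
\hat{x} \;=\; B^{*} y \;=\; B^{*} B \, x, \qquad
\left(B^{*} B\right)_{f,f'} \;=\; \frac{1}{K} \sum_{k=1}^{K} e^{\,j 2\pi (f' - f)\, t_k},
```

so the diagonal entries equal one, and for random sample times t_k the off-diagonal entries concentrate near zero once K is large enough relative to S; hence each estimate is nearly unbiased, with a variance controlled by the remaining signal energy.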
0:09:00 Here is a brief sketch of the proof; the proof has more of an inductive flavor to it. We can prove that the estimator is essentially unbiased, and when K is big enough we can prove that, with probability 1 minus order of epsilon squared, we get a guarantee of this kind, where x_S is the best S-term representation of the signal x. The constant here is a signal-dependent constant, because it is the constant that separates the dominant frequency components from the non-dominant ones: for example, in this figure there are five frequency components, but we are interested in only the three dominant ones, and this constant acts as a kind of threshold which separates those components from the rest; that is what goes into the expression.
0:09:48 So, as I said before, in the first iteration the estimates are quite close to the actual values, but the variance is quite high, because the whole signal is still left to be estimated; that is indicated by this long error bar here. We can prove that, with probability 1 minus order of epsilon squared, a good fraction of the frequencies get identified correctly. Once those frequencies are identified, their coefficients can be estimated, so their contribution can be subtracted from the signal and the residual can be updated.
0:10:18 Because a good fraction of the frequencies got identified in the first iteration, the variance now drops, since the amount of energy left in the signal also goes down. We can prove that, with a similar probability, a good fraction of those frequencies which were not identified properly in the first iteration will now be identified, so the net number of identified frequencies goes up. The result is that the variance keeps going down from iteration to iteration; eventually, after a sufficient number of iterations, the variance is small enough that all the remaining frequencies can be identified correctly, and then their coefficients can be estimated.
0:10:59 Now let us move on to some results that support the algorithm.
0:11:06 In the first experiment we reconstruct multi-tone signals with the algorithm one that I just discussed. A multi-tone signal just means a signal that is a linear combination of sinusoids; the sinusoids have random phases and frequencies and equal amplitudes. A signal of this kind is taken, we add additive white Gaussian noise to it, and we reconstruct it at different input SNR levels. The signal is sampled both with the regular PPM design, that is, the one without the randomness, and with the rand-PPM, and then reconstructed.
0:11:43 One curve corresponds to the regular PPM, another to the random one, and the black line corresponds to our benchmark, which we call the S-term Nyquist. That is nothing but the input signal sampled at the Nyquist rate, at the same quantization level as that of these ADCs, and then truncated to keep only the S terms in the frequency domain. The truncation actually improves the performance of the benchmark, so this is a good benchmark.
0:12:07 A point to note here is that even when the input SNR is infinite, that is, when no Gaussian noise is added, there is some quantization error already present in the signal because of the finite time resolution of the PPM ADC.
0:12:22 Looking at the results, we can see that adding randomness to the system definitely improves the performance of the ADC: the rand-PPM performs much better than the regular one at all SNR levels and stays much closer to the benchmark. That is owed to the better correlation properties of its measurement matrix. Another point to note is that as we increase the number of tones in the signal, that is, as we make the signal less sparse, there is a lot of degradation in the performance of the reconstruction with the regular PPM, whereas the rand-PPM is barely affected relative to the benchmark. So at the same number of measurements, the rand-PPM can handle less sparse signals much better than the regular PPM.
0:13:02 The second experiment is also just a proof-of-concept experiment, where we take a simple one-tone signal and reconstruct it, varying the number of measurements and varying the input SNR. Let me explain the plot and the title first: the input SNR is on the x-axis, and the y-axis shows the percentage sampling needed for success. The percentage sampling is simply the ratio K over N, and success is a criterion we defined for this particular experiment; the percentage sampling needed for success is thus the least number of samples that you need in order to succeed in those terms.
0:13:39 As you can see in the plot, the rand-PPM needs far fewer measurements to succeed: as the input SNR increases, it quickly drops down to about three percent and stays about the same, and the gap between the two designs also widens as the SNR increases.
0:13:54 The graph on the left is the case when there is no additive Gaussian noise and only the measurement noise. Even in this case the regular PPM kind of diverges once you go below about twenty percent of the measurements, whereas the rand-PPM can go as low as three or two percent.
0:14:12 Before the next experiment, I should mention that we dealt with on-grid frequencies in these two experiments; in this experiment we look at an off-grid frequency and how the algorithm performs. By an off-grid frequency I mean a frequency which lies off the Nyquist grid that we are searching on. We know that this causes spectral leakage and adversely affects the sparsity. We try to counter that using the Hamming window approach: we multiply the signal with a Hamming window after the sampling, so the sampling process is unaffected, and since the Hamming window is reversible, being nonzero at all times, we can reverse it after the reconstruction.
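A sketch of how that windowing step could look on the nonuniform samples; the continuous-time Hamming profile and its inversion are my reading of the description, not code from the paper.

```python
import numpy as np

def hamming_at(t, T_total):
    """Continuous-time Hamming taper over [0, T_total].

    Its minimum value is 0.08, never zero, so dividing it back out is safe.
    """
    return 0.54 - 0.46 * np.cos(2 * np.pi * t / T_total)

# After sampling: weight each measurement by the window at its own sample time,
# leaving the sampling hardware untouched:
#     y_win = y * hamming_at(t_k, T_total)
# After reconstruction: undo the taper on the recovered time-domain signal:
#     x_rec = x_win_rec / hamming_at(t_grid, T_total)
```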
0:14:49 As you can see, the Hamming window definitely improves the performance of the system and takes it close to the benchmark; but at low SNR, reversing the Hamming window amplifies the noise, so there it does not work as well. I have not plotted the regular PPM here because that algorithm does not converge at all.
0:15:07 In our next experiment we look at something of a practical signal, namely an FM signal, which we sample at thirty-three percent of the Nyquist rate. We have a similar result: with the Hamming window the performance improves a lot. The same holds for the AM signal.
0:15:22 Coming to algorithm two, I will just mention here that it assumes some additional conditions on the signal, and because of that we are able to reduce the number of iterations from I to just one. So it is computationally much less expensive than algorithm one, and its performance is comparable to algorithm one at high SNRs; at low SNR it actually does much better than algorithm one. So if you know that the additional conditions on the input signal are in place, or if the SNR is low, algorithm two works better.
0:15:57 We have similar results for the practical signals, the AM and FM signals; for the details of algorithm two, I would ask you to refer to the paper.
0:16:07 So, in conclusion, we have a compressive sampling rand-PPM ADC that keeps all the advantages of the regular PPM ADC and also gains all the merits of the compressive sampling technique. Its strengths are that it can handle signals that are less sparse, noisy signals, and signals with off-grid frequencies, and the reconstruction algorithms are simple and can be made even simpler for a practical hardware implementation.
0:16:36 That concludes my presentation; thank you for your attention.
0:16:55 [Audience member] In one of your slides you showed off-grid frequency estimation: when the significant frequencies of the signal lie off the grid, the leakage hurts the performance, right? Would you consider, or have you tried, using a finer, say by some factor, frequency grid for the reconstruction?
0:17:21 [Presenter] A finer frequency grid definitely improves the performance, but again it increases the computation, and since we want to implement this in hardware we want to keep the algorithm simple. So instead we are trying to use the Hamming window approach.
0:17:34 [Audience member] Pretty good, yeah. Thank you.
0:17:43 [Audience member] I was wondering why you chose pulse position modulation as the A-to-D conversion technique. Have you compared it to other methods? There are quite a few of them that exploit sparsity, and so on.
0:17:55 [Presenter] Yes, in our paper we looked at a few other A-to-D time-encoder techniques, but most of them fall short. There is the continuous-time DSP approach, which has a higher power than the PPM ADC and works in the analog domain, so it gives up the advantage of a time-based design. And the finite-rate-of-innovation kind of approach is a little bit less stable than our method, and it also needs close to Nyquist-rate sampling, whereas ours does sub-Nyquist sampling. That is the major difference: none of the other methods seem to operate well at sub-Nyquist sampling, so we looked at the PPM ADC.
0:18:41 [Audience member] Is there some deeper explanation why these other methods fail and this one is so suitable for this application in combination with sparse signal processing?
0:18:52 [Presenter] I have not really thought much about that, but I think it is maybe the signal-dependent nature: the sampling is signal-dependent, which this design exploits, and that might be why it works. But I do not know for sure.
0:19:09 [Audience member] Okay, thanks.
0:19:16 [Session chair] We may have time for one more question.
0:19:21 [inaudible]