0:00:16 | my |

0:00:17 | oh |

0:00:29 | oh |

0:00:29 | close one can everyone hear me |

0:00:32 | okay uh |

0:00:33 | good afternoon to all of you, my name is [unclear] |

0:00:36 | and I'm from the University of Michigan |

0:00:38 | this work is with [unclear], and it is also joint work with Professors |

0:00:43 | Mike Flynn and [unclear] |

0:00:46 | and as the title suggests, I'm going to talk about a compressive sampling ADC |

0:00:51 | that we call the random PPM |

0:00:53 | so before I explain what random PPM is, I need to explain what PPM stands |

0:00:58 | for |

0:00:58 | it stands for pulse position modulation, and as you can probably understand from the name |

0:01:03 | the information is basically |

0:01:06 | present in the position of the pulses that it produces |

0:01:09 | uh |

0:01:10 | before going into the details, I have to mention that the PPM ADC falls into |

0:01:14 | the general category of ADCs that are |

0:01:16 | pulse-time encoders |

0:01:18 | that is because they convert the voltage information into time delays and then digitize the time information using |

0:01:24 | efficient |

0:01:25 | TDCs, time-to-digital converters |

0:01:28 | the advantage of that is that it lowers the power of the ADC, because you can now replace all |

0:01:32 | the analog blocks |

0:01:34 | in the classical ADC with digital parts |

0:01:37 | and that reduces the power |

0:01:39 | and also |

0:01:41 | with the current scaling-down trends in CMOS, the chip areas are going down, the gate |

0:01:45 | delays are going down, and |

0:01:47 | the power supply voltages are going down, so it is much easier to get a finer time resolution |

0:01:51 | than a finer voltage resolution. so, making use of all these advantages |

0:01:56 | this PPM ADC was proposed in 2009 by Naraghi |

0:02:01 | et al. |

0:02:02 | without going into the details, the PPM ADC consists of |

0:02:06 | a reference ramp signal |

0:02:09 | the ramp signal is periodic, and the input signal is compared with this reference ramp signal |

0:02:14 | continuously |

0:02:16 | and the points where the signal and the ramp |

0:02:19 | intersect |

0:02:20 | are the points which are recorded |

0:02:22 | and because the starting points of the ramps are known |

0:02:26 | and the slopes of the ramps are known |

0:02:28 | the delta t one and delta t two, the |

0:02:31 | positions of the pulses |

0:02:32 | give you the information about the voltage value of the signal at those points as well |

0:02:37 | so effectively, what the ADC does is a |

0:02:40 | nonuniform, signal-dependent kind of sampling |
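The ramp-crossing sampling described above can be sketched in a few lines of Python. This is only an illustration of the idea, not the authors' design: the ramp slope, frame length, offset, and test signal below are all my own assumptions.

```python
import numpy as np

def ppm_sample(x, t, frame_len, slope, starts=None):
    """Record the times where a per-frame ramp crosses x(t).

    x, t : densely sampled signal and its time grid (stand-in for continuous time).
    starts : per-frame ramp start offsets; zeros mimics the regular PPM,
             random offsets mimic the random-PPM variant described later.
    """
    n_frames = int(t[-1] // frame_len)
    if starts is None:
        starts = np.zeros(n_frames)              # regular PPM: ramps start on the grid
    crossings = []
    for k in range(n_frames):
        t0 = k * frame_len + starts[k]
        in_frame = (t >= t0) & (t < (k + 1) * frame_len)
        ramp = slope * (t[in_frame] - t0) - 1.0  # ramp sweeping the signal range [-1, 1)
        diff = x[in_frame] - ramp
        sign_change = np.where(np.diff(np.sign(diff)) != 0)[0]
        if sign_change.size:                     # record the first crossing in the frame
            crossings.append(t[in_frame][sign_change[0]])
    return np.array(crossings)

# toy input: one tone with amplitude < 1, so every ramp crosses it
t = np.linspace(0.0, 1.0, 100_000)
x = 0.8 * np.sin(2 * np.pi * 5 * t)
tau = ppm_sample(x, t, frame_len=0.02, slope=100.0)
# because ramp starts and slopes are known, x(tau) = slope*(tau - t0) - 1
```

Passing `starts=rng.uniform(0, frame_len / 2, n_frames)` for some NumPy `rng` would give the randomized ramp starts the talk introduces later, at the cost of occasionally missing a crossing near the end of a frame.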

0:02:44 | and classically, because of the non-uniformity, you can note that it is difficult to |

0:02:49 | use linear reconstruction, so |

0:02:51 | classically, some nonlinear iterative reconstruction techniques were used |

0:02:56 | without going into the details, I want to mention that those algorithms need the |

0:03:00 | signal to be sampled at about 1.2 times the Nyquist rate |

0:03:04 | and below the Nyquist rate those algorithms diverge |

0:03:07 | so our goal here is to take this PPM ADC and convert it into a |

0:03:11 | compressive sampling ADC |

0:03:12 | by which we mean that we need a sampler in the ADC that samples the signal at |

0:03:16 | sub-Nyquist rates |

0:03:18 | in the time domain, and then we also want a reconstruction scheme inside the ADC that |

0:03:23 | will be fast enough and accurate |

0:03:25 | enough to reconstruct the signal in the frequency domain. of course, to use the compressive sensing design |

0:03:30 | we need to assume that the signal is sparse |

0:03:33 | uh |

0:03:34 | so the input signal is assumed sparse in the frequency domain, and it is S-sparse, which means |

0:03:38 | it has only S dominant frequencies |

0:03:40 | so if you sort the coefficients by magnitude, they have to decay fast, and there should be |

0:03:44 | only S dominant ones |

0:03:47 | and of course, a straightforward way to convert the PPM ADC into a |

0:03:51 | compressive sampling ADC is to just sample at a |

0:03:54 | sub-Nyquist rate and then, of course, use nonlinear reconstruction techniques |

0:03:59 | we use a matching pursuit kind of reconstruction technique |

0:04:04 | um |

0:04:05 | that is one way |

0:04:07 | and as we will see, that is much inferior to the random PPM design; what |

0:04:11 | the random PPM does is just introduce randomness into the system appropriately |

0:04:15 | specifically, we can make the starting points of the ramps random |

0:04:20 | and |

0:04:21 | we have tried different random distributions, but the uniform distribution turned out to work the best |

0:04:26 | so the simulations in this presentation are for that |

0:04:30 | and again, as I said, the |

0:04:32 | sampling is now random and non-uniform |

0:04:35 | and it preserves all the properties and advantages of the PPM design |

0:04:39 | and we used |

0:04:41 | two different algorithms to reconstruct the signal; both of them are |

0:04:45 | greedy, matching-pursuit kinds of algorithms, but since I have only a little time, I |

0:04:50 | will discuss only the first algorithm |

0:04:53 | so |

0:04:55 | moving to the measurement matrix of our system; before going to that, I just want to explain a little bit of |

0:05:00 | how our measurement matrix is going to be different |

0:05:02 | from the classical compressed sensing matrix |

0:05:05 | usually in compressed sensing, if we assume that F is the input signal and Y |

0:05:10 | are the measurements |

0:05:11 | we take random linear measurements using this matrix Phi |

0:05:14 | and if you assume that the linear |

0:05:19 | measurements are nothing but random on-grid sampling |

0:05:21 | then the matrix Phi is just a collection of random rows from the identity matrix |

0:05:26 | and if F is sparse in the frequency domain |

0:05:29 | where Psi is the DFT matrix and X is the representation of F in the frequency domain, and we assume |

0:05:34 | that X is sparse |

0:05:35 | and we want to reconstruct X |

0:05:37 | then the measurement matrix is just the multiplication of Phi |

0:05:41 | by Psi |

0:05:42 | which would be a sub-sampled DFT matrix |
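The classical on-grid picture just described can be written out as a toy sketch (sizes and the random seed are my illustrative choices, not from the talk):

```python
import numpy as np

N, K = 64, 16
rng = np.random.default_rng(0)

# Phi: K random rows of the N x N identity (random on-grid sampling)
rows = rng.choice(N, size=K, replace=False)
Phi = np.eye(N)[rows]

# Psi: DFT synthesis matrix, so the time signal is f = Psi @ x for spectrum x
n = np.arange(N)
Psi = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

A = Phi @ Psi          # the measurement matrix: a sub-sampled DFT
# equivalently, A is just the selected rows of Psi
assert np.allclose(A, Psi[rows])
```

The point of the next slide is precisely that the PPM matrix B is *not* of this row-subset form, because its sample times fall off the grid.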

0:05:44 | but in the case of PPM, Phi is not a matrix like this, but an interpolation |

0:05:48 | matrix now |

0:05:49 | so B is not going to be a sub-sampled DFT |

0:05:53 | so if you want to look at what B is |

0:05:55 | we go to the identification step of the algorithm's first iteration |

0:05:59 | so if you assume that the signal has only one frequency, f naught |

0:06:03 | that is, the signal is equal to e to the j two pi f naught t, and if it's |

0:06:07 | given that the signal is sampled at time points t one to t K |

0:06:11 | then we know that the measurements have to look like this |

0:06:14 | so K is the number of measurements that the ADC takes |

0:06:19 | and N is the number of measurements if we sampled at the Nyquist rate; of course we want to |

0:06:24 | keep K much less than N |

0:06:25 | so the matrix is a K cross N matrix |

0:06:29 | and using this observation |

0:06:31 | if we are looking, for example |

0:06:33 | at the frequency f one, then we're looking for measurements in this manner |

0:06:38 | and if we're looking for f two, we're looking for measurements in this manner, and so on |

0:06:41 | we can fill up the columns of the matrix and normalize it appropriately |
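Filling the columns this way might look like the following sketch, with exp(j 2π f t_k) entries at the recorded (off-grid) times and a 1/√K normalization; the function name, grid size, and unit-interval time scaling are my assumptions for illustration:

```python
import numpy as np

def build_B(sample_times, N):
    """K x N matrix: the column for grid frequency f holds exp(j*2*pi*f*t_k)/sqrt(K)."""
    K = len(sample_times)
    freqs = np.arange(N)     # the Nyquist-grid frequencies being searched over
    # the t_k need not lie on any grid, so B is generally not a row-subset of a DFT
    return np.exp(2j * np.pi * np.outer(sample_times, freqs)) / np.sqrt(K)

rng = np.random.default_rng(1)
t_k = np.sort(rng.uniform(0.0, 1.0, size=12))   # random, off-grid sample times
B = build_B(t_k, N=64)
# the 1/sqrt(K) factor gives each column unit norm
assert np.allclose(np.linalg.norm(B, axis=0), 1.0)
```

The column correlations of this B, rather than an RIP constant, are what the talk's guarantees are built on.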

0:06:45 | and uh |

0:06:46 | a point to note here is that the measurement matrix is random, and the randomness comes from these |

0:06:51 | time points t one to t K at which the signal was sampled |

0:06:55 | and since these points are non-uniform and they don't lie on any Nyquist grid |

0:07:01 | because of that, the matrix B does not necessarily satisfy any typical RIP |

0:07:07 | so we don't check for RIP on B; rather, we |

0:07:10 | make do with something much weaker |

0:07:12 | we look at the correlations between the different columns, and we develop bounds on them |

0:07:17 | and use those bounds to further give guarantees on the algorithm |

0:07:22 | so |

0:07:24 | the reconstruction algorithm |

0:07:26 | it's |

0:07:27 | similar to any matching pursuit algorithm, and instead of going into the |

0:07:32 | details, I'll just mention that |

0:07:34 | it has two blocks: the frequency identification block and the coefficient estimation block |

0:07:38 | and the most intensive step in this block is the least squares |

0:07:41 | which we do away with, using the estimates from the next iteration |

0:07:44 | and the coefficient estimation step is present inside the iterations |

0:07:49 | and the most intensive part here is the multiplication of B transpose times |

0:07:54 | the residual |

0:07:55 | and uh |

0:07:56 | because of the special structure of the matrix B, we can formulate the B transpose r |

0:08:01 | product |

0:08:02 | as an inverse NUFFT |

0:08:04 | and we can use some existing algorithms for evaluating this in time of order |

0:08:10 | N log N |

0:08:11 | and so if the number of iterations of the algorithm is i |

0:08:14 | the average runtime is of order i N log N |
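The identify-estimate-subtract loop can be sketched directly with the matrix B. To be clear, this is my own generic greedy reconstruction, not the authors' algorithm one: it uses a dense B^H r product (which the talk accelerates with an NUFFT to get the N log N cost) and a plain least-squares coefficient update (which the talk specifically avoids); the sizes and stopping rule are illustrative.

```python
import numpy as np

def reconstruct(y, B, S, n_iter=6):
    """Greedy sketch: per iteration, correlate the residual with the columns of B,
    add the S most-correlated frequencies to the support, re-estimate, subtract."""
    support = np.array([], dtype=int)
    r = y.copy()                                  # initially the residual is just y
    for _ in range(n_iter):
        corr = B.conj().T @ r                     # frequency identification scores
        support = np.union1d(support, np.argsort(np.abs(corr))[-S:])
        coef, *_ = np.linalg.lstsq(B[:, support], y, rcond=None)
        r = y - B[:, support] @ coef              # estimation + subtraction
        if np.linalg.norm(r) < 1e-10 * np.linalg.norm(y):
            break                                 # residual fully explained
    x_hat = np.zeros(B.shape[1], dtype=complex)
    x_hat[support] = coef
    return x_hat
```

On a noiseless toy problem with a few tones and enough random off-grid samples, a handful of iterations recovers the support and coefficients essentially exactly.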

0:08:20 | so now let us briefly look at why and how the algorithm works |

0:08:25 | if you look at this B transpose r |

0:08:28 | initially, r is nothing but the measurement Y |

0:08:31 | so B transpose r is nothing but B transpose B times X |

0:08:35 | and we can prove that, if you have enough measurements K, when K is bigger than |

0:08:38 | S over epsilon squared times some constant, where S is the sparsity of the signal |

0:08:41 | and epsilon is some constant |

0:08:42 | we can show that the off-diagonal elements of this Gram matrix are small enough |

0:08:48 | and then we can further prove that |

0:08:50 | the estimates that we get |

0:08:52 | have expected values quite close to the original values, and that their variance is bounded by the |

0:08:58 | energy of the signal |

0:09:00 | here is a brief sketch of the proof |

0:09:02 | the proof has more of an iterative character to it |

0:09:05 | so we can further prove that [unclear] |

0:09:09 | [unclear] |

0:09:11 | and when K is big enough, we can prove that, with probability one minus order of epsilon squared |

0:09:16 | this kind of a guarantee holds |

0:09:18 | X sub S here is the |

0:09:21 | best S-term representation of the signal X; and the constant here |

0:09:26 | is kind of a signal-dependent constant, because |

0:09:29 | it is a constant that separates the dominant frequency components from the non-dominant ones. for example, in |

0:09:35 | this figure |

0:09:36 | there are five frequency components, but we are interested in only the three dominant ones |

0:09:40 | and this serves as a kind of threshold which separates the components of interest from the rest |

0:09:45 | and that is what goes into this expression |

0:09:48 | so, as I said before |

0:09:50 | in the first iteration the estimates are |

0:09:53 | quite close to the actual values, but the variance is quite high, because the whole signal is left |

0:09:58 | to be estimated |

0:09:59 | and that is indicated by this long arrow here |

0:10:02 | and we can prove that, with probability one minus order of epsilon squared, a good fraction of the frequencies |

0:10:07 | are identified correctly |

0:10:09 | once those frequencies are in fact identified and their coefficients estimated |

0:10:12 | their contribution can be subtracted from the signal, and what is left is the residual you work with |

0:10:17 | and |

0:10:18 | so because a good fraction of the frequencies are identified in the first iteration |

0:10:21 | the variance now drops down, because |

0:10:24 | the amount of energy in the residual signal also goes down |

0:10:27 | and we can prove that, with a similar probability, a good fraction of those frequencies which were not identified |

0:10:33 | properly in the first iteration |

0:10:35 | will now be identified correctly |

0:10:37 | so the net number of frequencies that are identified correctly goes up |

0:10:41 | and |

0:10:41 | so the result of that is that the variance keeps going down from iteration to iteration |

0:10:46 | eventually |

0:10:47 | after a sufficient number of iterations, the variance is small enough that |

0:10:51 | all the remaining frequencies can be identified correctly |

0:10:54 | and their coefficients estimated |

0:10:59 | now, moving on to some |

0:11:01 | results of the algorithm |

0:11:03 | so we have a series of results that support the algorithm |

0:11:06 | and the first one is where we reconstruct multi-tone signals with the algorithm one that I |

0:11:11 | just discussed |

0:11:12 | and by multi-tone signals I just mean signals which are |

0:11:16 | a linear combination of sinusoids, where the sinusoids have random phases and frequencies |

0:11:22 | and they have equal amplitudes |

0:11:25 | this kind of signal is taken, and then we add additive white Gaussian noise to |

0:11:30 | it |

0:11:31 | and reconstruct it at different input SNR levels; the signal is sampled with both the regular PPM |

0:11:36 | design, that is, the |

0:11:38 | non-random PPM |

0:11:40 | and also with the random PPM, and then reconstructed |

0:11:43 | the red line corresponds to the regular PPM, the blue curve corresponds to the random one, and the |

0:11:47 | black line corresponds to our |

0:11:49 | benchmark, which we call the S-term Nyquist estimate |

0:11:52 | which is nothing but the input signal sampled at the Nyquist rate |

0:11:55 | with the same quantization level as that of these ADCs, and then truncated to keep only the S terms |

0:12:00 | in the frequency |

0:12:01 | domain |

0:12:01 | and that truncation actually improves the performance of the benchmark, so this is a |

0:12:06 | good benchmark |
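The benchmark just described can be sketched as: Nyquist-rate samples, quantized like the ADC, then truncated to the S largest DFT coefficients. The quantizer step, signal, and sizes below are my illustrative choices, not the paper's settings.

```python
import numpy as np

def s_term_nyquist_estimate(x_nyq, S, step=1 / 128):
    """Quantize Nyquist-rate samples, then keep only the S largest DFT terms."""
    xq = step * np.round(x_nyq / step)        # uniform quantizer, mimicking ADC resolution
    X = np.fft.fft(xq)
    keep = np.argsort(np.abs(X))[-S:]         # the S dominant frequency components
    Xs = np.zeros_like(X)
    Xs[keep] = X[keep]
    return np.real(np.fft.ifft(Xs))

n = np.arange(256)
x = np.sin(2 * np.pi * 10 * n / 256) + 0.3 * np.sin(2 * np.pi * 33 * n / 256)
bench = s_term_nyquist_estimate(x, S=4)       # 4 = two real tones x two conjugate bins
err = np.linalg.norm(bench - x) / np.linalg.norm(x)
```

For on-grid tones like these, the truncation discards the quantization noise that landed in the other bins, which is why it improves the benchmark.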

0:12:07 | and a point to note here is that even when no input noise |

0:12:12 | I mean, no additive Gaussian noise, is added |

0:12:14 | because of the finite time resolution of the PPM ADC, there is some |

0:12:19 | quantization noise already present in the signal |

0:12:22 | and looking at the results, we can see that adding randomness to the system definitely improves the performance |

0:12:28 | of the ADC |

0:12:29 | that is, the random PPM performs much better than the regular one at all SNR |

0:12:33 | levels, and |

0:12:34 | it is tracking the benchmark much more closely |

0:12:37 | that is owing to the better correlation properties of its measurement matrix |

0:12:40 | and another point to note is that as we increase the number of tones in the signal, that is |

0:12:44 | as we make the signal less sparse |

0:12:45 | then |

0:12:46 | we can see there is a lot of degradation in the performance of the reconstruction with the regular |

0:12:50 | PPM, whereas the random PPM is |

0:12:53 | relatively unaffected, compared to the benchmark |

0:12:55 | so at the same number of measurements, the random PPM |

0:12:59 | can handle less sparse signals much better than the regular PPM |

0:13:02 | our second experiment is also just a proof-of-concept experiment, where we take a simple |

0:13:07 | one-tone signal and reconstruct it, varying the number of measurements and varying the input SNR |

0:13:14 | and I'll explain the plot and the title first: the input SNR is on the X axis, and |

0:13:18 | the Y axis is given by the percentage sampling needed for success |

0:13:21 | the percentage sampling is simply the ratio K over N |

0:13:25 | and success is some criterion we define; there is an example of it for this |

0:13:29 | particular experiment |

0:13:31 | and the percentage sampling needed for success is thus the least number of samples |

0:13:35 | that you need to succeed under this criterion |

0:13:39 | so as you can see in the plots |

0:13:41 | the random PPM needs much fewer measurements to succeed |

0:13:46 | and as the input SNR increases, this |

0:13:48 | quickly drops down to about three percent and stays about the same; and the gap also increases as the |

0:13:52 | SNR increases |

0:13:54 | and the graph on the left is the case when there is no additive Gaussian noise, and only the |

0:13:57 | measurement noise |

0:13:58 | even in this case |

0:14:00 | the regular PPM kind of diverges |

0:14:02 | once you go lower than about twenty percent of the measurements |

0:14:06 | whereas |

0:14:07 | the random PPM can go as low as three or two percent |

0:14:12 | for our next experiment, I would just like to mention that |

0:14:15 | we have dealt with on-grid frequencies in these two experiments; in this |

0:14:20 | experiment we look at an off-grid frequency and how the algorithm performs |

0:14:24 | by an off-grid frequency I mean a frequency which lies |

0:14:27 | off the Nyquist grid that we are searching on |

0:14:29 | and we know that that causes spectral leakage and adversely affects the sparsity |

0:14:34 | and we try to counter that using the Hamming window approach: we multiply the signal with a Hamming |

0:14:39 | window after the sampling, so the sampling process is unaffected |

0:14:42 | and since the Hamming window is reversible |

0:14:45 | that is, nonzero at all times, we can reverse it after the reconstruction |
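Because the Hamming window is nonzero everywhere, the windowing can be undone exactly after reconstruction. A toy sketch of the idea, assuming Nyquist-grid samples for simplicity (the talk applies the window to the nonuniform samples; signal and sizes here are mine):

```python
import numpy as np

n = np.arange(256)
x = np.sin(2 * np.pi * 20.4 * n / 256)   # off-grid tone -> spectral leakage

w = np.hamming(256)                       # nonzero at every sample, hence invertible
xw = w * x                                # window applied after sampling
# ... reconstruct xw in the frequency domain; its leakage is much reduced ...
x_back = xw / w                           # reverse the window afterwards
assert np.allclose(x_back, x)

# leakage comparison: relative energy outside the strongest few bins
X, Xw = np.abs(np.fft.fft(x)), np.abs(np.fft.fft(xw))
def tail(Z, keep=8):
    idx = np.argsort(Z)[:-keep]
    return np.linalg.norm(Z[idx]) / np.linalg.norm(Z)
# tail(Xw) is far smaller than tail(X): the windowed tone is better concentrated
```

The division by `w` also shows the low-SNR caveat from the talk: near the window's small edge values, any reconstruction noise gets amplified when the window is reversed.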

0:14:49 | and as you can see, using the Hamming window definitely improves the performance of the system |

0:14:55 | and brings it closer to the benchmark |

0:14:57 | but at low SNR, reversing the Hamming window boosts the noise too, so it doesn't work as |

0:15:02 | well |

0:15:02 | and I haven't plotted the regular PPM here, because the algorithm doesn't converge at all |

0:15:07 | in our next experiment we look at something of a practical signal, that is, an FM signal |

0:15:13 | and we sampled it at thirty-three percent of the Nyquist rate, and we have similar results: with the Hamming window the |

0:15:18 | performance improves |

0:15:20 | a lot |

0:15:20 | and the same for the AM signal |

0:15:22 | and without going into algorithm two, I'll just mention here that |

0:15:28 | we impose some additional |

0:15:30 | conditions on the signal, and because of that we are able to reduce the number of iterations from i to just |

0:15:35 | one |

0:15:36 | and so it is computationally much less expensive than algorithm one |

0:15:40 | and its performance is comparable to algorithm one at high SNRs |

0:15:44 | while at low SNRs it actually does much better than algorithm one |

0:15:47 | so if you know that the additional conditions on the input signal are satisfied |

0:15:52 | then |

0:15:53 | or also if the SNR is low, algorithm two works better |

0:15:57 | and we have similar results for the practical signals, the AM and FM signals |

0:16:02 | and for the details of algorithm two I would ask you to refer to the |

0:16:05 | paper |

0:16:06 | and uh |

0:16:07 | so in conclusion, we have a |

0:16:10 | compressive sampling, random |

0:16:12 | PPM ADC |

0:16:13 | that |

0:16:14 | keeps all the advantages of the regular PPM ADC and also takes all the merits of the |

0:16:20 | compressive sampling technique |

0:16:21 | its strengths are that it can handle |

0:16:25 | signals that are less sparse |

0:16:26 | and signals with off-grid frequencies, and |

0:16:29 | the reconstruction algorithms are simple and can be made simpler |

0:16:33 | for practical hardware |

0:16:35 | implementation |

0:16:36 | so that concludes my presentation; thank you for your attention |

0:16:55 | I mean, just in the slide you showed of off-grid frequency estimation |

0:17:05 | you said the significant leakage makes the signal less sparse, which hurts the performance, right? |

0:17:11 | um |

0:17:12 | would you consider, or have you tried, using |

0:17:15 | a finer |

0:17:17 | say, a factor-of-two finer frequency grid for reconstruction? |

0:17:21 | a finer frequency grid definitely improves the performance, but again, it increases the computation |

0:17:26 | and |

0:17:27 | since we want to implement it in hardware, we want to keep the ADC simple |

0:17:31 | so instead we are trying to use the Hamming window approach |

0:17:34 | okay |

0:17:35 | pretty good, yeah |

0:17:36 | thank you |

0:17:39 | uh |

0:17:43 | I was wondering why you chose pulse position modulation as the A-to-D conversion technique; have you |

0:17:48 | compared it to |

0:17:50 | other methods? |

0:17:52 | there are quite a few of them that exploit sparsity, and so on |

0:17:55 | yes, so in our paper we looked at a few other time-encoder techniques |

0:18:01 | but most of them... |

0:18:02 | there is one, a continuous-time DSP, proposed by Professor Tsividis, and it kind |

0:18:07 | of |

0:18:08 | has a higher power than the |

0:18:10 | PPM ADC, and it works in the analog domain |

0:18:13 | and compared to it, being based in the analog domain leaves no advantage [unclear] |

0:18:17 | and even the finite-rate-of-innovation |

0:18:20 | kind of approaches were a little more unstable than our method |

0:18:23 | and |

0:18:24 | they |

0:18:26 | also need close to Nyquist-rate sampling, not |

0:18:29 | sub-Nyquist sampling |

0:18:30 | that's the major difference: none of the other methods seemed to work well at |

0:18:35 | sub-Nyquist sampling, so we looked at the |

0:18:37 | PPM ADC |

0:18:41 | is there some deeper explanation why these other methods fail and this one is very suitable for |

0:18:47 | this |

0:18:48 | application, in combination with sparse |

0:18:50 | signal processing? |

0:18:52 | uh |

0:18:55 | I haven't really thought much about that |

0:18:57 | but I think that |

0:18:59 | the signal-dependent nature, maybe |

0:19:02 | oh |

0:19:03 | the signal-dependent sampling that it achieves |

0:19:06 | with |

0:19:06 | this Phi |

0:19:08 | is what might work |

0:19:09 | okay |

0:19:10 | but |

0:19:11 | I don't know |

0:19:16 | we may have time for one more question |

0:19:21 | hmmm |

0:19:24 | [inaudible] |