Oh, close one. Can everyone hear me? Okay.

Good morning to all of you. My name is Praveen, and I am from the University of Michigan. This is joint work with my advisers, Professors Mike Flynn and Anna Gilbert.

As the title suggests, I am going to talk about a compressive sampling ADC that we call the random PPM.

Before I explain what random PPM is, I need to explain what PPM stands for. It stands for pulse position modulation, and as you can probably understand from the name, the information is basically present in the positions of the pulses that it produces.

Before going into the details, I have to mention that the PPM ADC falls into the general category of ADCs called time encoders. That is because they convert the voltage information into time delays, and then digitize the time information using efficient time-to-digital converters.

The advantage of that is that it lowers the power of the ADC, because you can now replace all the analog blocks in the classical ADC with digital parts, and that reduces the power.

Also, with the current scaling-down trends, chip areas are going down, gate delays are going down, and power supply voltages are going down, so it is much easier to get a fine time resolution than a fine voltage resolution.

Making use of all these advantages, this PPM ADC was proposed in 2009 by Naraghi et al.

Without going into the details: the PPM ADC uses a reference ramp signal. The ramp signal is periodic, and the input signal is compared with this reference signal continuously. The points where the signal and the ramp intersect are the points which are recorded. Because the starting points of the ramps are known, and the slopes of the ramps are known, the positions of the pulses, the delays Δt1 and Δt2 and so on, give you the information about the voltage of the signal at those points as well.

So effectively, what the ADC achieves is a nonuniform, signal-dependent kind of sampling.
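To make the ramp comparison concrete, here is a minimal numerical sketch of the PPM sampling mechanism; the input waveform, ramp range, and dense time grid are hypothetical choices of mine, not the hardware parameters from the talk. For the random PPM, the ramp start times `t_starts` would be drawn at random instead of being regularly spaced.

```python
import numpy as np

def ppm_sample(x, T_ramp, slope, t_starts, t_grid):
    """Record the input/ramp crossing times and the voltages they imply."""
    pulses = []
    for t0 in t_starts:
        mask = (t_grid >= t0) & (t_grid < t0 + T_ramp)
        t = t_grid[mask]
        # Reference ramp for this frame, rising from -1 to +1.
        diff = x(t) - (-1.0 + slope * (t - t0))
        idx = np.where(np.diff(np.sign(diff)) != 0)[0]
        if idx.size:
            tc = t[idx[0]]                                 # pulse position
            pulses.append((tc, -1.0 + slope * (tc - t0)))  # implied voltage
    return pulses

# Usage: a two-tone input sampled by ramps that restart every T_ramp.
x = lambda t: 0.4 * np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 11 * t)
T_ramp = 0.01
slope = 2.0 / T_ramp                       # ramp spans [-1, 1] per frame
t_starts = np.arange(0.0, 1.0, T_ramp)     # regular PPM: fixed starts
t_grid = np.linspace(0.0, 1.0, 200_000)
print(ppm_sample(x, T_ramp, slope, t_starts, t_grid)[:3])
```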

Classically, because of the nonuniform nature of this sampling, it is difficult to use linear reconstruction, so some classical nonlinear iterative reconstruction techniques were used. Without going into the details, I will just mention that those algorithms need the signal to be sampled at about 1.2 times the Nyquist rate, and below the Nyquist rate those algorithms diverge.

So our goal here is to take this PPM ADC and convert it into a compressive sampling ADC. By that we mean that we need a sampler in the ADC that samples the signal at a sub-Nyquist rate in the time domain, and we also want a reconstruction inside the ADC that will be fast enough and accurate enough to reconstruct the signal in the frequency domain. Of course, to use the compressive sensing design, we need to assume that the signal is sparse.

The input signal is assumed to be sparse in the frequency domain; it is S-sparse, which means it has only S dominant frequencies. So if you sort the coefficients by magnitude, they have to decay fast, and there should be only S dominant ones.

Of course, a straightforward way to convert the PPM ADC into a compressive sampling ADC is to just sample at a sub-Nyquist rate and then use the new reconstruction techniques; we use a matching-pursuit kind of reconstruction technique.

As we will see, that is much inferior to the random PPM design. What the random PPM does is just introduce randomness into the system appropriately: specifically, we make the starting points of the ramps random. We have tried different random distributions, but the uniform distribution turned out to be the best, so the simulations in this presentation are for that.

Again, the sampling is now random and nonuniform, and it retains all the properties and advantages of the PPM design.

We used two different algorithms to reconstruct the signal. Both of them are greedy, matching-pursuit kinds of algorithms, but since I have little time, I will discuss only the first algorithm.

Moving to the measurement matrix of the system. Before going to that, I just want to explain a little bit of how our measurement matrix is going to be different from the classical compressed sensing matrix.

Usually in compressed sensing, if we assume that s is the input signal and y are the measurements, we take random linear measurements y = Φs through a matrix Φ. If you assume that the measurements are nothing but random on-grid sampling, then the matrix Φ is just a collection of random rows from the identity matrix. And s is sparse in the frequency domain: s = Ψx, where Ψ is the DFT matrix and x is the representation of s in the frequency domain, and we assume that x is sparse and that is what we want to reconstruct. So the measurement matrix is just the product B = ΦΨ, which would be a subsampled DFT matrix.

But in the case of PPM, Φ is not a matrix like this; it is an interpolation matrix now. So B is not going to be a subsampled DFT.

uh we go to that the and then step it that wasn't the labs iteration

so if you as you that the signal has only one and C S not

uh that is the signal is equal to to the G to by not be and if you if it's

given that there the signal is sampled at time points to you want to take K

then we know that the measurements are have to look like that

so uh and my uh K is the number of measurements that that a C takes

and N is the number of measurements if we sampled at nyquist rates of course we want

keep K much fess than and

so much a matrix is a cake cross in matrix

Using this observation, if we look at one frequency f we fill in the measurements in this manner, if we look at another frequency we fill them in this manner, and so on; we can fill up the entries of the matrix B column by column and normalize it appropriately.
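As an illustration, here is a small sketch of that construction; the exponential entries follow the single-tone observation above, while the 1/√K column normalization is my assumption for the "normalize appropriately" step.

```python
import numpy as np

def build_B(t, N, T):
    """K x N measurement matrix: entry (k, n) is exp(j*2*pi*f_n*t_k).

    t: K nonuniform (random) sample times in [0, T)
    N: size of the Nyquist grid, with on-grid frequencies f_n = n / T
    """
    f = np.arange(N) / T
    B = np.exp(2j * np.pi * np.outer(t, f))
    return B / np.sqrt(len(t))             # assumed normalization

# Usage: K = 64 random sample times, N = 512 Nyquist-rate samples.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1.0, 64))
B = build_B(t, N=512, T=1.0)
print(B.shape)                             # (64, 512)
```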

A point to note here is that the measurement matrix is random, and the randomness comes from the time points t1 through tK at which the signal was sampled. And since these points are nonuniform and they don't lie on any Nyquist grid,

the matrix B does not necessarily satisfy any typical property such as the RIP. So we don't check for the RIP; rather, we make do with something much weaker: we look at the correlations between the different columns, establish bounds on them, and use those bounds to further give a theoretical guarantee for our algorithm.
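The quantity being bounded is easy to look at numerically; this is only an empirical sketch with sizes I made up, whereas the paper's bounds are analytic.

```python
import numpy as np

# Empirical look at the column correlations of B for random sample times.
rng = np.random.default_rng(1)
K, N = 64, 512
t = np.sort(rng.uniform(0.0, 1.0, K))
B = np.exp(2j * np.pi * np.outer(t, np.arange(N))) / np.sqrt(K)
G = np.abs(B.conj().T @ B)                 # |<b_m, b_n>| for all column pairs
np.fill_diagonal(G, 0.0)                   # ignore the (unit) diagonal
print("max off-diagonal column correlation:", G.max())
```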

Now, the reconstruction algorithm. It is similar to any matching-pursuit or greedy algorithm, and instead of going into the details I will only mention that it has two blocks: the frequency identification block and the coefficient estimation block. The most intensive step in the first block is the least squares, which we carry out with an iterative method. The coefficient estimation step is present inside the iterations, and the most intensive part here is the multiplication of B transpose times the residual.

Because of the special structure of the matrix B, we can formulate B transpose times r as an inverse NUFFT (nonuniform FFT), and we can use some existing algorithms for computing this in order N log N time. So if the number of iterations of the algorithm is i, the average run time is of order i N log N.
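For concreteness, here is a much-simplified sketch of one iteration of such a greedy recovery; this is my own simplification, not the authors' exact Algorithm 1, and the proxy B^H r is exactly the product that an inverse NUFFT would accelerate to O(N log N).

```python
import numpy as np

def greedy_step(B, y, x_hat, S):
    """One simplified identification + estimation pass (a sketch)."""
    r = y - B @ x_hat                      # residual of current estimate
    proxy = B.conj().T @ r                 # B^H r: the NUFFT-accelerable step
    idx = np.argsort(np.abs(proxy))[-S:]   # identify S dominant frequencies
    x_new = x_hat.copy()
    x_new[idx] += proxy[idx]               # update those coefficients
    return x_new
```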

Now, just to give a brief flavor of why and how the algorithm works. If you look at B transpose times r, initially the residual r is nothing but the measurement y, so B transpose r is nothing but B transpose B times x. We can prove that if the number of measurements K is big enough, where S is the sparsity of the signal and ε is some constant, then the off-diagonal elements of this Gram matrix are small enough. And thereby we can prove that the estimates that we get have expected values quite close to the original values, and that their variance is bounded by the energy of the signal.
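That claim can be checked numerically in a toy setup of my own construction (uniform random sample times, a 3-sparse spectrum): the proxy c = B^H B x has mean close to x, and its per-coordinate variance stays below ||x||²/K.

```python
import numpy as np

rng = np.random.default_rng(5)
N, K, trials = 256, 32, 2000
x = np.zeros(N)
x[[10, 40, 100]] = [1.0, 0.7, 0.5]         # 3-sparse spectrum
c = np.empty((trials, N), dtype=complex)
for i in range(trials):
    t = rng.uniform(0.0, 1.0, K)           # uniform random sample times
    B = np.exp(2j * np.pi * np.outer(t, np.arange(N))) / np.sqrt(K)
    c[i] = B.conj().T @ (B @ x)            # proxy for one draw of times
print(np.abs(c.mean(axis=0)[[10, 40, 100]]))   # close to [1.0, 0.7, 0.5]
print(c.var(axis=0).max(), x @ x / K)          # variance vs. energy bound
```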

A brief sketch of the proof: the proof has more of an inductive flavor to it. When K is big enough, we can prove that with probability 1 - O(ε²) we get a guarantee of this kind, where xS is the best S-term representation of the signal x. The constant in the bound is a kind of signal-dependent constant, because it separates the dominant frequency components from the non-dominant ones. For example, in this figure there are five frequency components, but we are interested in only the three dominant ones, and this constant acts as a kind of threshold which separates the components of interest from the rest, which go into the tail expression.

So, as I said before, in the first iteration the estimates are quite close to the actual values, but the variance is quite high, because the whole signal is still left to be estimated; that is indicated by the long error bars here.

We can prove that with probability 1 - O(ε²), a good fraction of the frequencies are identified correctly. Once those frequencies are identified, their coefficients are estimated, so their contribution can be subtracted from the signal and the residual can be updated.

So, because a good fraction of the frequencies are identified in the first iteration, the variance now drops down, because the amount of energy left in the signal also goes down. And we can prove that, with a similar probability, a good fraction of those frequencies which were not identified properly in the first iteration will now be identified correctly.

So the net number of frequencies identified goes up. The result is that the variance keeps going down from iteration to iteration, and eventually, after a sufficient number of iterations, the variance is small enough that all the remaining frequencies can be identified correctly and their coefficients estimated.

Now, moving on to some results of the algorithm.

We have a series of results that support the theory. For the first one, we reconstruct multitone signals with the Algorithm 1 that I just discussed. Multitone signals just means signals that are a linear combination of sinusoids, where the sinusoids have random phases and frequencies.
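For reference, a small sketch of such a test input; the tone count, grid size, and noise level here are my own choices.

```python
import numpy as np

def multitone(t, S, N, T, rng):
    """Sum of S sinusoids with random on-grid frequencies and phases."""
    freqs = rng.choice(np.arange(1, N // 2), size=S, replace=False) / T
    phases = rng.uniform(0.0, 2.0 * np.pi, size=S)
    return sum(np.sin(2.0 * np.pi * f * t + p) for f, p in zip(freqs, phases))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
clean = multitone(t, S=3, N=1024, T=1.0, rng=rng)
noisy = clean + 0.05 * rng.standard_normal(t.shape)  # additive white Gaussian noise
```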

A signal of this kind is taken, then we add additive white Gaussian noise to it and reconstruct it at different input SNR levels. The signal is sampled with both the regular PPM design and the random PPM, and then reconstructed.

One curve corresponds to the regular PPM, another to the random one, and the black curve corresponds to our benchmark, which we call the S-term Nyquist. That is nothing but the input signal sampled at the Nyquist rate, at the same quantization level as that of these ADCs, and then truncated to keep only the S terms in the frequency domain. The truncation actually improves the performance of the benchmark, so this is a good benchmark.

A point to note here is that even before the white Gaussian noise is added, because of the finite time resolution of the PPM ADC, there is some quantization error already present in the signal.

Looking at the results, we can see that adding randomness to the system definitely improves the performance of the ADC: the random PPM performs much better than the regular one at all SNR levels and tracks the benchmark much more closely. That is owing to the better correlation properties of its measurement matrix.

Another point to note is that as we increase the number of tones in the signal, that is, as we make the signal less sparse, there is a lot of degradation in the performance of the reconstruction with the regular PPM, whereas the random PPM is relatively unaffected compared to the benchmark. So, with the same number of measurements, the random PPM can handle less sparse signals much better than the regular PPM.

Our second experiment is also just a proof-of-concept experiment, where we take a simple one-tone signal and then reconstruct it with a varying number of measurements and a varying input SNR. I'll explain the plot on the right first. The input SNR is on the x-axis, and the y-axis is the percentage sampling needed for success. The percentage sampling is simply the ratio K over N, and success is a criterion we defined as an example for this particular experiment; the percentage sampling needed for success is thus the least number of samples that you need to succeed under this criterion.
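In code, the metric might look like the sketch below; the 5% relative-error threshold standing in for "success" is a hypothetical choice of mine, since the talk does not spell out the criterion.

```python
import numpy as np

def pct_sampling_needed(reconstruct, x_true, N, K_values):
    """Smallest K/N (in percent) at which reconstruction succeeds."""
    for K in sorted(K_values):
        x_hat = reconstruct(K)             # recover from K measurements
        rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
        if rel_err < 0.05:                 # assumed success criterion
            return 100.0 * K / N
    return 100.0                           # never succeeded below Nyquist
```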

As you can see in the plot, the random PPM needs many fewer measurements to succeed. As the SNR increases, this quickly drops down to about three percent and stays about the same, and the gap between the two also increases as the SNR increases.

The graph on the left is the case when there is no additive Gaussian noise and only the measurement noise. Even in this case, the regular PPM kind of breaks once you go below about twenty percent of the measurements, but the random PPM can go as low as three or two percent.

For our next experiment, I should first mention that we have dealt with on-grid frequencies in these two experiments. In this experiment we look at an off-grid frequency and how the algorithm performs. By an off-grid frequency I mean a frequency which lies off the Nyquist grid that we are searching on, and we know that that causes spectral leakage and adversely affects the sparsity.

We try to counter that using the Hamming window approach: we multiply the signal with a Hamming window after the sampling, so the sampling process is unaffected. And since the Hamming window is reversible, that is, it is nonzero at all times, we can reverse it after the reconstruction.
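A small sketch of that windowing trick (the helper and the tone are mine): weight the recorded samples by a Hamming window evaluated at their nonuniform times, reconstruct, and then divide the window back out on the Nyquist grid. The Hamming window's minimum value is 0.08, never zero, which is what makes the final division safe.

```python
import numpy as np

def hamming_at(t, T):
    """Hamming window evaluated at arbitrary times t in [0, T)."""
    return 0.54 - 0.46 * np.cos(2.0 * np.pi * t / T)

T = 1.0
rng = np.random.default_rng(3)
t_samp = np.sort(rng.uniform(0.0, T, 64))    # nonuniform sample times
y = np.sin(2.0 * np.pi * 10.4 * t_samp)      # off-grid tone at 10.4 / T Hz
y_win = hamming_at(t_samp, T) * y            # window applied after sampling
# ... run the sparse reconstruction on y_win, then on the Nyquist grid:
t_nyq = np.arange(1024) * T / 1024
# x_rec = x_rec_win / hamming_at(t_nyq, T)   # reverse the window
```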

As you can see, the Hamming window definitely improves the performance of the system and brings it closer to the benchmark. At low SNR, reversing the Hamming window amplifies the noise, so it doesn't work as well there. I haven't plotted the regular PPM here, because the algorithm doesn't converge at all.

In our next experiment we look at something of a practical signal, that is, an FM signal, and we sample it at thirty-three percent of the Nyquist rate. We have similar results: the Hamming window improves the performance of the algorithm. And the same holds for the AM signal.

Coming to Algorithm 2, I will just mention here that we augmented it with some additional conditions on the signal, and because of that we were able to reduce the number of iterations from i to just one. So it is computationally much less expensive than Algorithm 1, and its performance is comparable to Algorithm 1 at high SNRs; at low SNR it actually does much better than Algorithm 1.

So if you know that the additional conditions on the input signal are satisfied, or also if the SNR is low, Algorithm 2 works better. We have similar results for the practical signals, the AM and FM signals, and for the details of Algorithm 2 I would ask you to refer to the paper.

So, in conclusion, we have a compressive sampling random PPM ADC that keeps all the advantages of the regular PPM ADC and also takes all the merits of the compressive sampling technique. It samples at sub-Nyquist rates, it can handle signals that are less sparse and signals with off-grid frequencies, and the reconstruction algorithms are simple and can be made simpler for practical hardware implementation.

So that concludes my presentation. Thank you for your attention.

You just showed, in your slides, off-grid frequency estimation; the spectral leakage makes the signal less sparse, which hurts the performance, right? Would you consider, or have you tried, using a finer frequency grid for reconstruction?

A finer frequency grid definitely improves performance, but again, it increases the computation. Since we want to implement this in hardware, we want to keep the algorithm simple, so instead we try to use the Hamming window approach.

Okay, very good, yeah. Thank you.

I was wondering, why did you choose pulse position modulation as the A-to-D conversion technique? Have you compared it to other methods that exploit sparsity? There are quite a few of them.

Yeah, so in the paper we look at a few other A-to-D time-encoding techniques. There is the continuous-time DSP proposed by Professor Tsividis's group, but it has a higher power than the PPM ADC, and it works in the analog domain, and staying in the analog domain gives up the advantage of the time encoding.

And the finite-rate-of-innovation kinds of methods are a little bit more unstable than our method, and they also need close to Nyquist-rate sampling rather than sub-Nyquist sampling.

That is the major difference: none of the other methods seemed to work well with sub-Nyquist sampling, so we looked at the PPM ADC.

Is there some deeper explanation for why these other methods fail and this one is very suitable for the application, in combination with sparse signal processing?

I haven't really thought much about that, but I think the signal-dependent nature, maybe the signal-dependent sampling that it achieves, might be why it works.

Okay, thank you.

We may have time for one more question.
