thank you

So this talk will be about a constrained random demodulator. This is work that I did with Robert Calderbank and also with Waheed Bajwa.

First, a little history of sampling. We start with the well-known result from Shannon and Nyquist: if you have a bandlimited signal, then you can perfectly reconstruct that signal if you have enough samples. Another result, from the early seventies, looked at taking sub-Nyquist samples of a signal and then doing parameter estimation to identify the parameters of a single tone in a large bandwidth.

More recently we have the results of compressed sensing, which exploit sparsity as a prior on the input signals; some examples are chirp sampling, Xampling, and the random demodulator.

A brief note about compressed sensing, since I'm sure most of you are well aware of it: we have an underdetermined system of linear equations, and a sparse input vector of which we are taking measurements with our measurement matrix Φ. If our measurement matrix Φ satisfies a near-isometry, which is captured in the restricted isometry property (RIP), then we can make very nice statements about reconstruction of that sparse vector.
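To make the setup concrete, here is a small numpy sketch with arbitrary toy dimensions; a generic Gaussian matrix stands in for the measurement matrix Φ (the talk's Φ is the structured HDF matrix discussed later). It shows the underdetermined system y = Φx and the near-isometry that the RIP formalizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: R measurements of a W-dimensional, S-sparse vector (R << W).
W, R, S = 256, 64, 4

# A Gaussian matrix is a generic stand-in for the measurement matrix Phi;
# scaling by 1/sqrt(R) makes E||Phi x||^2 = ||x||^2.
Phi = rng.standard_normal((R, W)) / np.sqrt(R)

# A random S-sparse input vector.
x = np.zeros(W)
support = rng.choice(W, size=S, replace=False)
x[support] = rng.standard_normal(S)

# The measurements: an underdetermined linear system y = Phi x.
y = Phi @ x

# Near-isometry: for sparse x, ||Phi x||^2 stays close to ||x||^2.
ratio = np.linalg.norm(y) ** 2 / np.linalg.norm(x) ** 2
```

For one sparse vector this ratio just concentrates near 1; the RIP is the stronger statement that it holds uniformly over all S-sparse vectors.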

Another line of research, started by Prony around the time of the French Revolution, is parameter estimation. Prony was trying to approximate a curve of this form given a set of samples, and if this is the curve you are trying to fit, then you need at least 2n points to be able to do that.

From this approach we can see what the Shannon framework gives you: a bandlimited signal has 1/T degrees of freedom per unit time, and so if you sample at the rate 1/T, you will have a sufficient representation of that signal. This idea was generalized more recently by Vetterli in the notion of the rate of innovation: if any signal, more general than bandlimited, can be described by a finite number of degrees of freedom per unit time, which he calls the rate of innovation, then that is the necessary sampling rate to be able to describe that signal.

Now we will concentrate on compressed sensing, and in particular the random demodulator, which was presented by Tropp and his collaborators. The basic idea behind the random demodulator is that you take the signal you are trying to sample, multiply it by a random waveform, and integrate this product over a time longer than the Nyquist period of the signal. Then you take samples at the output of the integrator, which acts as a low-pass filter.

This continuous-time process can be represented as a discrete-time process described by a matrix D, which represents multiplication by the random waveform, and a matrix H, which is the integration operator. So our measurement matrix acting on sparse frequency vectors is given by the product Φ = HDF, where F is the inverse Fourier transform matrix.
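The factorization can be written down directly. This is a toy numpy sketch (small sizes chosen for illustration, not the dimensions used in the talk): H integrates and dumps blocks of chips, D carries the ±1 waveform on its diagonal, and F is the inverse DFT matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
W, R = 32, 8                  # toy sizes: W Nyquist-rate chips, R output samples
block = W // R                # chips integrated per output sample

# H: integrate-and-dump operator, summing W/R consecutive chips per sample.
H = np.kron(np.eye(R), np.ones((1, block)))

# D: diagonal matrix holding the +/-1 random chipping waveform.
d = rng.choice([-1.0, 1.0], size=W)
D = np.diag(d)

# F: inverse DFT matrix, mapping sparse frequency vectors to time samples.
F = np.fft.ifft(np.eye(W), axis=0)

# Measurement matrix acting on sparse frequency vectors.
Phi = H @ D @ F

# Applying Phi to a 1-sparse frequency vector is the same as modulating the
# corresponding tone by the waveform and then integrating.
s = np.zeros(W)
s[3] = 1.0
tone = np.fft.ifft(s)
y_direct = H @ (d * tone)
```

The last two lines check that the matrix product really is the modulate-then-integrate operation applied in the frequency domain.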

So the rate at which you are taking samples is much lower than the Nyquist rate of the signal. But are you really sampling slower than Nyquist?

The key point here is that the random waveform generator of the random demodulator requires a random waveform that switches at the Nyquist rate of the input signal. And the reason we want to sample at a rate lower than Nyquist is that it is hard to build high-fidelity, high-rate analog-to-digital converters, because of the limit that the capacitors in the circuit place on the time it takes the converter to change states. The general rule is that a doubling of the sampling rate corresponds to a one-bit reduction in the resolution of the ADC. And since this random waveform is generated with the same kind of circuitry, it may be just as hard to generate a fast-switching waveform as it is to build a high-speed analog-to-digital converter.

Along the same lines, we take an aside back to the days of magnetic recording. The problem here is that engineers were trying to write lots of data on magnetic disks, but the idealized square pulses you would like cannot actually be recorded in practice. What you are left with are smooth pulses, and these smooth pulses cause trouble if you try to space them too close together. The transitions you are trying to detect in the waveform show up as peaks of alternating sign, and the ability of your readback mechanism to detect the transitions is compromised when the peaks are shifted or changed in amplitude. So the challenge for these engineers was how to write data while keeping the transitions separated enough in time to limit intersymbol interference.

Their solution was run-length limited (RLL) codes. These are codes written as NRZI data, parameterized by d and k, where d is the minimum number of zeros between ones and k is the maximum number of zeros between ones. The zeros in your sequence indicate no transition in your NRZI waveform, and the ones represent a transition, so k bounds the maximum time between transitions. The d constraint enforces separation between transitions, and the k constraint aids timing recovery, which was necessary in this magnetic recording situation.
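To make the (d, k) constraint concrete, here is a small hypothetical generator — a simple greedy chain, not one of the optimized codes used in practice — that emits constrained bits and maps them through NRZI:

```python
import numpy as np

def dk_bits(d, k, n, rng):
    """n bits with at least d and at most k zeros between consecutive ones."""
    bits, zeros = [], d          # assume the constraint held before the start
    for _ in range(n):
        if zeros < d:
            b = 0                # forced zero: too soon after the last one
        elif zeros >= k:
            b = 1                # forced one: maximum zero-run reached
        else:
            b = int(rng.random() < 0.5)
        bits.append(b)
        zeros = 0 if b else zeros + 1
    return np.array(bits)

def nrzi(bits):
    """NRZI mapping: a one toggles the waveform level, a zero holds it."""
    level, out = 1, []
    for b in bits:
        if b:
            level = -level
        out.append(level)
    return np.array(out)

rng = np.random.default_rng(2)
bits = dk_bits(d=1, k=3, n=2000, rng=rng)
wave = nrzi(bits)

# Every run of zeros between two ones has length between d and k, so
# transitions in the NRZI waveform are at least d + 1 chips apart.
ones = np.flatnonzero(bits)
gaps = np.diff(ones) - 1
```

The gap check at the end is exactly the (d, k) property: transitions in the resulting ±1 waveform are spaced at least d + 1 and at most k + 1 chips apart.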

So what we end up with is a rate loss and increased correlations when we use (d, k)-constrained sequences, but the increased transition spacing lets us pack data bits more finely than the transitions in the waveform. A nice example of an RLL code that was used in IBM disk drives is the (2,7) code. Here we have a rate-one-half code, but a factor-of-three gain in minimum transition spacing, and so we see a fifty percent gain in recording density.
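The fifty-percent figure is just arithmetic, which we can spell out under the usual convention that recording density scales with code rate times minimum transition spacing:

```python
# (2,7) code: rate 1/2, and d = 2 zeros between ones means transitions in the
# NRZI waveform are at least d + 1 = 3 channel bits apart.
code_rate = 1 / 2
d = 2
transition_spacing_gain = d + 1          # vs. uncoded NRZI (transition every bit)
density_gain = code_rate * transition_spacing_gain
# Data bits per minimum transition width: 1.5x the uncoded case, a 50% gain.
```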

So we use this same idea in the random demodulator and replace the unconstrained waveform of the random demodulator with a constrained waveform. We then have a waveform with a larger width between transitions, meaning that the Nyquist rate of our signal can be higher given a fixed minimum transition width in the waveform.

But what do these correlations do to the properties of our random demodulator? Again, our measurement matrix Φ is given by the product HDF, and each entry is a sum of randomly signed Fourier components of the Fourier matrix. If we want Φ to satisfy the RIP, then we want it to be a near-isometry, which is captured in this statement right here: our measurement matrix is nearly an orthonormal system.

So if we look at how good an average system is, given our constrained sequences, then in expectation we have the identity matrix plus a matrix Δ, given right here, which is completely determined by the correlations in our sequence. We have converted the problem into one of looking, on average anyway, at the matrix Δ and how it behaves. Note that this function right here, which will come up later, is the inner product between the columns of the matrix H.

Looking at the spectral norm of the matrix Δ, we examine its Gram matrix. Each entry of the Gram matrix is given by this expression, which depends on the function F-hat, which we call the windowed spectrum: it is the spectrum of our random sequence, that is, the Fourier transform of the autocorrelation function, but multiplied by this window function. As W/R increases, where W is the size of the input vectors and R is the number of measurements — which is the situation we are looking at — this window gets larger, and the windowed spectrum can be approximated by the actual spectrum of the random waveform, accounting for the missing zero-lag term, this minus one right here. So our Gram matrix becomes a diagonal matrix with the square of the spectrum on the diagonal, and the worst-case spectral norm of our matrix Δ is determined by the part of the spectrum that is farthest away from one.

If we look at some specific examples of random waveforms: for the random demodulator, which uses unconstrained independent sequences, we have a flat spectrum, which gives us a matrix Δ that is identically zero, and so the system provides uniform illumination for all sparse input vectors.

For the second example we looked at repetition-coded sequences. These repetition codes take an independent Rademacher sequence and repeat each entry once, so that every pair of entries is completely dependent. This is a plot of the spectrum of such a sequence, and you can see that at high frequencies the spectrum rolls off to zero, which means that the spectral norm of our Δ matrix goes to one. So we do not expect the RIP to be satisfied if we use these repetition-coded sequences, because the high frequencies are not well illuminated.
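Both spectra can be checked numerically. This sketch (arbitrary lengths and block size) estimates block-averaged periodograms of an unconstrained Rademacher sequence, which is flat, and a repetition-coded one, which rolls off toward zero at f = 1/2:

```python
import numpy as np

rng = np.random.default_rng(3)
n, nfft = 2 ** 14, 256

def avg_periodogram(x, nfft):
    """Average the periodogram over consecutive blocks of length nfft."""
    blocks = x[: (len(x) // nfft) * nfft].reshape(-1, nfft)
    return np.mean(np.abs(np.fft.fft(blocks, axis=1)) ** 2, axis=0) / nfft

# Unconstrained: independent +/-1 (Rademacher) chips -> flat spectrum.
flat = avg_periodogram(rng.choice([-1.0, 1.0], size=n), nfft)

# Repetition-coded: each Rademacher chip repeated once, so adjacent chips are
# fully dependent -> spectrum 1 + cos(2*pi*f), which vanishes at f = 1/2.
rep = avg_periodogram(np.repeat(rng.choice([-1.0, 1.0], size=n // 2), 2), nfft)

low = rep[1:21].mean()                         # near f = 0 (bin i is f = i/nfft)
high = rep[nfft // 2 - 20 : nfft // 2].mean()  # near f = 1/2
```

The low-frequency bins of the repetition-coded spectrum sit near 2 while the bins near f = 1/2 sit near 0, matching the roll-off in the plot.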

The third random waveform we consider is what we call generalized RLL sequences. They are generated from a Markov chain that imposes the d constraint and the k constraint on the transition widths of the waveform. The autocorrelation of such a sequence is given by this expression, which depends on the transition probability matrix P of the Markov chain and also on the output symbols, where the vector b here is a collection of the output symbols for each state, and a is the vector b pointwise multiplied by the stationary distribution of the Markov chain. Because the vector b is orthogonal to the all-ones vector, the autocorrelation decays geometrically at a rate that depends on the second-largest eigenvalue of the matrix P.

In this plot we illustrate the geometric decay of the autocorrelation for the RLL sequences. This group of plots right here is for d = 1, and these are for d = 2, so you can see that the lower the value of d, the faster the correlation decays in the sequence. There is also a slight dependence on k, but that is much less pronounced than the dependence on d.
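The decay can also be seen empirically. This sketch uses a simple greedy (d, k) generator as a hypothetical stand-in for the Markov chains in the talk, and estimates the waveform autocorrelation at a few lags for d = 1 and d = 2:

```python
import numpy as np

def rll_waveform(d, k, n, rng):
    """+/-1 NRZI waveform of a greedy (d, k)-constrained bit sequence."""
    zeros, level, out = d, 1.0, []
    for _ in range(n):
        if zeros < d:
            b = 0                # forced zero: too soon after the last one
        elif zeros >= k:
            b = 1                # forced one: maximum zero-run reached
        else:
            b = int(rng.random() < 0.5)
        zeros = 0 if b else zeros + 1
        if b:
            level = -level       # a one toggles the NRZI level
        out.append(level)
    return np.array(out)

def autocorr(x, lag):
    return float(np.mean(x[:-lag] * x[lag:]))

rng = np.random.default_rng(4)
w1 = rll_waveform(d=1, k=4, n=200_000, rng=rng)
w2 = rll_waveform(d=2, k=8, n=200_000, rng=rng)

# |autocorrelation| at increasing lags, for each constraint.
r1 = [abs(autocorr(w1, lag)) for lag in (1, 4, 12)]
r2 = [abs(autocorr(w2, lag)) for lag in (1, 4, 12)]
```

As in the plot, the lag-1 correlation is larger for the larger d, and for d = 1 the correlation has essentially died out by modest lags.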

This plot gives an example spectrum for a d = 1, k = 20 RLL sequence. Here you can see that the spectrum rolls off at high frequency, but it does not roll off to zero, so the spectral norm of our Δ matrix will not go to one, and we have some expectation that the RIP is satisfied. Furthermore, notice that in the region marked one here, for low-pass signals, the spectrum is very close to one, and for the high-pass signals marked two, the spectrum is very close to zero. So if we restrict our input signals to low-pass and high-pass respectively, we would expect to see different reconstruction performance.

And that is what we did. These plots show the probability of reconstruction versus sparsity for low-frequency and high-frequency signals. I should note that these values of bandwidth are normalized so that both the unconstrained and the constrained sequences have the same minimum transition width, that is, the same minimum width between transitions in the waveform. We see that our constrained random demodulator performs better than the random demodulator for low-pass signals, but it performs worse for high-pass signals.

What this tells us is that the modulating sequence can be chosen, based on its spectrum, to illuminate different regions of the spectrum of our input signals. And even if we do not restrict ourselves to high-pass or low-pass signals, we still see a gain in the bandwidth of the signals that we can acquire and reconstruct.

Here you see plots for the random demodulator and our constrained random demodulator: if you allow a thirteen percent reduction in your allowed sparsity, you can still see a twenty percent gain in the bandwidth you are able to use. So there is a trade-off between the sparsity and the bandwidth of your input signals.

Finally, we note that our theoretical results on the RIP account for the maximum dependence distance in the random waveform and the matrix Δ, and we see a slight inflation in the allowed sampling rate.

So the takeaways are: there is a bandwidth increase given a restriction on the minimum transition width of your waveform, that is, given fidelity constraints; there is a trade-off between sparsity and bandwidth; and you can make your demodulator tunable, so that based on its spectrum you can illuminate different input signals.

Thank you.

We should have time for maybe two questions, maybe one and a half.

In your simulations, I assume you have a perfect binary sequence that you are putting into the system. So you have constrained it, but the sequences are still perfect. Your motivation for doing this, as I understand the magnetic media argument, is that these waveforms would not be perfect in practice. Have you looked at what the effect of those imperfections would be in the actual system?

We did look at that, but we had trouble finding a way to capture it in the discrete model that we are using.

So is that going to be a big problem with using this in practice?

We think that using these constrained sequences would reduce the effect of those imperfections; that was our motivation for using them. But I don't know specifically what the effects would be of putting this into practice.

We have time for one quick question.

Can you elaborate a little bit on how you handle a discrete grid in frequency?


Okay, thank you very much.