0:00:13Thank you.
0:00:14So this talk will be about a constrained random demodulator. This is work that I did with my adviser, Robert Calderbank, and also with Waheed Bajwa.
0:00:24First, a little history of sampling.
0:00:27We start with the well-known result from Shannon and Nyquist: if you have a band-limited signal, then you can perfectly reconstruct that signal if you have enough samples. Another result, from the early seventies, looked at taking sub-Nyquist samples of a signal and then doing parameter estimation to identify the parameters of a single tone in a large bandwidth.
0:01:00More recently, we have the results of compressed sensing, which exploit sparsity as a prior on the input signals. Some examples are chirp sampling, Xampling, and the random demodulator.
0:01:18A brief note about compressed sensing, since I'm sure most of you are well aware of it: we have an underdetermined system of linear equations and a sparse input vector of which we are taking linear measurements. If our measurement matrix Φ satisfies a near isometry, which is captured in the restricted isometry property (RIP), then we can give very nice statements about reconstruction of that sparse vector.
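A minimal numeric sketch of that near-isometry statement, using an i.i.d. Gaussian matrix as a stand-in for the measurement matrix (all sizes here are illustrative, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
W, R, S = 256, 64, 5            # ambient dimension, measurements, sparsity

# Gaussian stand-in measurement matrix; columns have unit norm on average.
Phi = rng.standard_normal((R, W)) / np.sqrt(R)

# Check the near-isometry ||Phi x||^2 ~ ||x||^2 over many sparse vectors.
ratios = []
for _ in range(1000):
    x = np.zeros(W)
    support = rng.choice(W, size=S, replace=False)
    x[support] = rng.standard_normal(S)
    ratios.append(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)

print(f"energy ratios fall in [{min(ratios):.2f}, {max(ratios):.2f}]")  # near 1
```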
0:01:52Another line of research started back with Prony, around the time of the French Revolution, and is more about parameter estimation. Prony was trying to approximate a curve of this form, a sum of n exponentials, given a set of samples. If this is the curve you are trying to fit, then you need at least 2n points to be able to do that.
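A sketch of Prony's idea with exactly 2n samples; the two modes and amplitudes below are made-up test values:

```python
import numpy as np

n = 2                                     # number of exponential modes
z_true = np.array([0.9 * np.exp(1j * 0.3), 0.7 * np.exp(-1j * 1.1)])
c_true = np.array([1.0 + 0j, 0.5 + 0j])

m = np.arange(2 * n)                      # exactly 2n samples suffice
y = (c_true * z_true ** m[:, None]).sum(axis=1)

# Linear prediction: y[k] = -(a1*y[k-1] + ... + an*y[k-n]) for k = n .. 2n-1.
M = np.array([[y[k - j] for j in range(1, n + 1)] for k in range(n, 2 * n)])
a = np.linalg.solve(M, -y[n:])

# The modes are the roots of z^n + a1*z^(n-1) + ... + an.
z_est = np.roots(np.concatenate(([1.0], a)))

# Amplitudes follow from a Vandermonde solve on the first n samples.
c_est = np.linalg.solve(z_est ** m[:n, None], y[:n])
print(np.round(z_est, 6), np.round(c_est, 6))
```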
0:02:22From this approach we can see what the Shannon result gives you: a band-limited signal has 1/T degrees of freedom per unit time, and so if you sample at the rate 1/T, then you will have a sufficient representation of that signal.
0:02:43This idea was generalized more recently by Vetterli in the idea of the rate of innovation: if any signal, more general than band-limited, can be described by a finite number of degrees of freedom per unit time, which he calls the rate of innovation, then that is the necessary sampling rate to be able to describe that signal.
0:03:10Now we will concentrate on compressed sensing, and in particular the random demodulator, which was presented by Tropp and his collaborators. The basic idea behind the random demodulator is that you have a signal that you are trying to sample; you multiply it by a random waveform, and then you integrate this product over a window that is longer than the Nyquist period of the signal. Then you take samples at the output of that integrator, which acts as a low-pass filter.
0:03:48This continuous-time process can be represented as a discrete-time process described by this matrix D, which represents multiplication by the random waveform, and the matrix H, which is the integration operator. So our measurement matrix acting on sparse frequency vectors is given by the product HDF, where F is the inverse Fourier transform matrix.
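A small sketch of this discrete model; the dimensions are my choices, and it assumes R divides W:

```python
import numpy as np

rng = np.random.default_rng(1)
W, R = 128, 32                      # Nyquist-rate dimension, number of samples

# D: pointwise multiplication by the random +-1 chipping waveform.
D = np.diag(rng.choice([-1.0, 1.0], size=W))

# H: integrate-and-dump; each output sample sums W/R consecutive chips.
H = np.kron(np.eye(R), np.ones(W // R))

# F: unitary inverse DFT, so the sparse vectors live in frequency.
F = np.conj(np.fft.fft(np.eye(W), norm="ortho"))

Phi = H @ D @ F                     # the R x W measurement matrix

x = np.zeros(W, dtype=complex)      # a 3-sparse frequency vector
x[[5, 40, 97]] = [1.0, -0.5, 2.0]
y = Phi @ x                         # R sub-Nyquist measurements
print(Phi.shape, y.shape)           # (32, 128) (32,)
```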
0:04:22So the rate at which you are taking samples is much lower than the Nyquist rate of your input signal. But are you really sampling slower than Nyquist?
0:04:36The key idea here is that the random waveform generator of the random demodulator requires a waveform that switches at the Nyquist rate of the input signal. The reason we want to sample at a rate lower than Nyquist is that it is hard to build high-fidelity, high-rate analog-to-digital converters, because of the limit that the capacitors in the circuit place on the time it takes to change the state of the converter. The general rule is that a doubling of the sampling rate corresponds to a one-bit reduction in the resolution of the ADC.
0:05:22And since this random waveform is generated from the same capacitors, it may be just as hard to generate a fast-switching waveform as it is to build a high-speed analog-to-digital converter.
0:05:40Along the same lines, we will take an aside back to the days of magnetic recording. The problem here is that engineers were trying to write lots of data on these magnetic disks, but idealized square pulses cannot actually be recorded in practice; what you are left with are these smoothed pulses, and these smoothed pulses have trouble if you try to space them too close together. The transitions you are trying to detect in the waveform are given by these peaks, alternating in sign, and the ability of your read-back mechanism to detect the transitions is compromised when the peaks are shifted or changed in amplitude. So the challenge for these engineers was how to write data while keeping the transitions separated enough in time to limit intersymbol interference.
0:06:43Their solution was run-length-limited (RLL) codes. These are codes written as NRZI data, parameterized by d and k, where d is the minimum number of zeros between ones: the zeros in your sequence indicate no transition in your NRZI waveform, while the ones represent a transition. Here k is the maximum number of zeros between ones, so the maximum time between transitions. The d constraint enforces the separation between transitions, and the k constraint aids timing recovery, which was necessary in this magnetic recording situation.
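A toy generator for such a waveform; the unbiased coin flip at the free positions is an arbitrary choice of mine, not the maxentropic Markov chain mentioned later in the talk:

```python
import numpy as np

def rll_nrzi_waveform(n, d, k, rng):
    """Random (d,k)-constrained data mapped to an NRZI +-1 waveform.

    Between consecutive ones there are at least d and at most k zeros;
    a one flips the waveform level, a zero holds it, so transitions end
    up at least d+1 chips apart.
    """
    bits, run = [], 0                        # run = zeros since the last one
    while len(bits) < n:
        if run < d:
            bits.append(0)                   # must extend the run of zeros
        elif run >= k:
            bits.append(1)                   # must place a transition
        else:
            bits.append(int(rng.random() < 0.5))
        run = 0 if bits[-1] == 1 else run + 1
    level, wave = 1.0, []
    for b in bits:                           # NRZI: 1 = flip, 0 = hold
        level = -level if b else level
        wave.append(level)
    return np.array(wave)

rng = np.random.default_rng(2)
w = rll_nrzi_waveform(40, d=2, k=7, rng=rng)
gaps = np.diff(np.flatnonzero(np.diff(w)))   # spacing between transitions
print(w.astype(int))
print(gaps)                                  # every gap is >= d + 1 = 3
```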
0:07:33What we end up with is a rate loss and increased correlations when we use (d,k)-constrained sequences, but the advantage is a rate gain from the increased transition spacing: we can space our data bits more finely than the transitions in the waveform.
0:07:54An example of an RLL code that was used in IBM disk drives is the (2,7) code. It is a rate-1/2 code, but we get a factor-of-three gain in minimum transition spacing, so we see a fifty percent gain in recording density (0.5 × 3 = 1.5 data bits per minimum transition width).
0:08:15So we use this same idea in the random demodulator and replace the unconstrained waveform of the random demodulator with a constrained waveform. We then have a waveform with a larger width between transitions, meaning that the Nyquist rate of the signals we can measure can be higher for a fixed minimum transition width in our waveform.
0:08:50But what do these correlations do to the properties of our random demodulator?
0:08:56Here, again, is our measurement matrix Φ, given by the product HDF; each entry is a sum of randomly signed Fourier components, entries of the Fourier matrix. If we want Φ to satisfy the RIP, then we want it to be a near isometry, which is captured in this statement right here, where we are saying that our measurement matrix is nearly an orthonormal system.
0:09:30So if we look at how good an average system is, given our constrained sequences, we see that in expectation we have the identity matrix plus this matrix Δ, which is given right here; you can see it is completely determined by the correlations in our sequence. So we have converted this problem into the problem of looking, on average anyway, at the matrix Δ and how it performs.
0:10:07Note that this function right here, which will come up later, is the inner product between the columns of the matrix H.
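A Monte Carlo sketch of this expectation (sizes and trial count are my choices): with independent chips the average Gram matrix is the identity, while a correlated sequence, here a repetition code for concreteness, leaves a nonzero Δ:

```python
import numpy as np

rng = np.random.default_rng(3)
W, R, trials = 64, 16, 2000
H = np.kron(np.eye(R), np.ones(W // R))
F = np.conj(np.fft.fft(np.eye(W), norm="ortho"))

def gram(seq):
    Phi = (H * seq) @ F                  # H @ diag(seq) @ F without the diag
    return Phi.conj().T @ Phi

G_iid = np.zeros((W, W), dtype=complex)  # independent +-1 chips
G_rep = np.zeros((W, W), dtype=complex)  # each chip repeated once
for _ in range(trials):
    G_iid += gram(rng.choice([-1.0, 1.0], size=W))
    G_rep += gram(np.repeat(rng.choice([-1.0, 1.0], size=W // 2), 2))

I = np.eye(W)
print("iid: ||E[Phi^H Phi] - I||_2 ~", np.linalg.norm(G_iid / trials - I, 2))
print("rep: ||E[Phi^H Phi] - I||_2 ~", np.linalg.norm(G_rep / trials - I, 2))
```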
0:10:12So, looking at the spectral norm of the matrix Δ, we look at its Gram matrix. Each entry of the Gram matrix is given by this expression, which depends on this function F̂, which we call the windowed spectrum: it is the spectrum of our random sequence, the Fourier transform of the autocorrelation function, but multiplied by this window function. As W/R increases, where W is the size of your input vectors and R is the number of measurements, which is the situation we are looking at, this window gets larger, and the windowed spectrum can be approximated by the actual spectrum of the random waveform, taking into account the lack of the zero term, this minus one right here. So our Gram matrix becomes a diagonal matrix with the square of the spectrum on the diagonal, and the worst-case spectral norm of our matrix Δ is determined by the part of our spectrum that is farthest away from one.
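A small helper illustrating that "farthest from one" criterion; the autocorrelation models below are toy choices of mine:

```python
import numpy as np

def spectrum_from_autocorr(r, grid=512):
    """Evaluate F(w) = sum_m r[m] e^{-iwm} for a symmetric autocorrelation
    r[0..M] (with r[-m] = r[m]) on a dense frequency grid."""
    omega = np.linspace(0.0, np.pi, grid)
    lags = np.arange(1, len(r))
    return omega, r[0] + 2.0 * np.cos(np.outer(omega, lags)) @ r[1:]

# Independent chips: delta autocorrelation -> flat spectrum, sup|F - 1| = 0.
_, F_iid = spectrum_from_autocorr(np.array([1.0, 0.0]))

# Correlated chips: geometric decay (rate 0.6 is arbitrary) -> F strays from 1.
_, F_cor = spectrum_from_autocorr(0.6 ** np.arange(12))

for name, F in [("independent", F_iid), ("correlated", F_cor)]:
    print(name, "sup |F(w) - 1| =", np.abs(F - 1.0).max())
```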
0:11:45If we look at some specific examples of random waveforms, we see that for the random demodulator, which uses unconstrained independent sequences, we have a flat spectrum, which gives us a matrix Δ that is identically zero; the system provides uniform illumination for all sparse input vectors.
0:12:15For the second example, we looked at a repetition-coded sequence. These repetition codes take an independent Rademacher sequence and repeat each entry one time, so that every pair of entries is completely dependent. This is a plot of the spectrum of such a sequence, and you can see that at high frequencies the spectrum rolls off to zero, which means that the spectral norm of our Δ matrix goes to one. So we do not expect the RIP to be satisfied if we use these repetition-coded sequences, because the high frequencies are not well illuminated.
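An empirical check of that roll-off (lengths and trial counts are mine): averaging periodograms of repetition-coded ±1 sequences shows the spectrum near twice the flat level at DC and essentially zero at the Nyquist edge:

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 1024, 400
psd = np.zeros(n)
for _ in range(trials):
    s = np.repeat(rng.choice([-1.0, 1.0], size=n // 2), 2)  # repetition code
    psd += np.abs(np.fft.fft(s)) ** 2 / n                   # periodogram
psd /= trials

# FFT bin k corresponds to frequency k/n cycles per chip; bin n/2 is Nyquist.
print("near DC     :", np.round(psd[:3], 2))             # ~2.0  (1 + cos w)
print("near Nyquist:", np.round(psd[n//2-2:n//2+1], 2))  # ~0.0: band is dark
```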
0:13:04The third random waveform we consider is what we call, in general, RLL sequences. They are generated from a Markov chain that imposes our d constraint and our k constraint on the transition widths of the waveform. The autocorrelation of such a sequence is given by this expression, which depends on the transition probability matrix of our Markov chain and also on the output symbols, where the vector b here is just a collection of the output symbols for each state, and a is the vector b pointwise multiplied by the stationary distribution of our Markov chain. Because our vector b is orthogonal to the all-ones vector, the autocorrelation decays geometrically at a rate that depends on the second-largest eigenvalue of the matrix P.
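The same algebra on a toy two-state chain (my example, not the maxentropic (d,k) chain from the talk): with zero-mean outputs b, the autocorrelation aᵀPᵐb decays exactly at the rate of the second eigenvalue:

```python
import numpy as np

p = 0.2                                  # switching probability (my choice)
P = np.array([[1 - p, p],                # transition probability matrix
              [p, 1 - p]])
b = np.array([1.0, -1.0])                # output symbol attached to each state
pi = np.array([0.5, 0.5])                # stationary distribution of P
a = pi * b                               # pointwise product, as in the talk

lam2 = np.sort(np.abs(np.linalg.eigvals(P)))[-2]    # second-largest |eig|
for m in range(6):
    R_m = a @ np.linalg.matrix_power(P, m) @ b      # R(m) = a^T P^m b
    print(m, round(R_m, 6), round(lam2 ** m, 6))    # here R(m) = (1 - 2p)^m
```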
0:14:11In this plot we illustrate the geometric decay of the correlations of these RLL sequences. This group of plots right here is for d equals one, and these here are for d equals two, so you can see that the lower the value of d, the faster the correlations decay in the sequence. There is also a slight dependence on k, but that is much less pronounced than the dependence on d.
0:14:41This plot gives an example spectrum for a d = 1, k = 20 RLL sequence. Here you can see that the spectrum rolls off at high frequencies, but it does not roll off to zero, so the spectral norm of our Δ matrix will not go to one, and we have some expectation that the RIP is satisfied. Furthermore, notice that in the region marked one here, for low-pass signals, the spectrum is very close to one, and for the high-pass signals, marked two, the spectrum is very close to zero. So if we restrict our input signals to low-pass or high-pass respectively, then we would expect to see different reconstruction performance.
0:15:37And that is what we did. These plots show the probability of reconstruction versus sparsity for low-frequency and high-frequency signals. I should note that these values for bandwidth are normalized so that both the unconstrained and the constrained sequences have the same minimum transition width, that is, the minimum width between transitions in the waveform. We see that our constrained random demodulator performs better than the random demodulator for low-pass signals, but it performs worse for high-pass signals. What this tells us is that our modulating sequence can be chosen, based on its spectrum, to illuminate different regions of the spectrum of our input signals.
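A Monte Carlo sketch of this kind of experiment, with a small orthogonal matching pursuit (OMP) recovery loop; I use a repetition-coded sequence as a simple correlated stand-in (the talk's experiments used RLL sequences), and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
W, R, trials = 128, 32, 100
H = np.kron(np.eye(R), np.ones(W // R))
F = np.conj(np.fft.fft(np.eye(W), norm="ortho"))

def omp(Phi, y, s):
    """Orthogonal matching pursuit: return the recovered support set."""
    r, idx = y.copy(), []
    for _ in range(s):
        idx.append(int(np.argmax(np.abs(Phi.conj().T @ r))))
        sub = Phi[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        r = y - sub @ coef
    return set(idx)

def run(seq_gen, band, s=3):
    """Fraction of trials where OMP recovers an s-sparse support in `band`."""
    hits = 0
    for _ in range(trials):
        Phi = (H * seq_gen()) @ F
        supp = rng.choice(band, size=s, replace=False)
        x = np.zeros(W, dtype=complex)
        x[supp] = rng.standard_normal(s) + 1j * rng.standard_normal(s)
        hits += omp(Phi, Phi @ x, s) == set(supp)
    return hits / trials

iid = lambda: rng.choice([-1.0, 1.0], size=W)
rep = lambda: np.repeat(rng.choice([-1.0, 1.0], size=W // 2), 2)
low = np.arange(W // 8)                       # frequencies near DC
high = np.arange(W // 2 - W // 8, W // 2)     # frequencies near Nyquist

for name, gen in [("iid", iid), ("repetition", rep)]:
    print(name, "low:", run(gen, low), "high:", run(gen, high))
```

The correlated sequence should recover low-band supports about as well as the unconstrained one while failing on the poorly illuminated high band, mirroring the qualitative trend described above.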
0:16:34Even if we do not restrict ourselves to high-pass or low-pass signals, we still see an advantage in the bandwidth of the signals that we can acquire and reconstruct. Here you see plots for the random demodulator and our constrained random demodulator: if you allow a thirteen percent reduction in your allowed sparsity, you can still see a twenty percent gain in the bandwidth you are able to use. So there is a trade-off between the sparsity and the bandwidth of your input signals.
0:17:14Finally, we note that our theoretical results on the RIP include the maximum dependence distance in the random waveform and the matrix Δ, and we see a slight inflation in the required sampling rate.
0:17:33So the takeaways are: there is a bandwidth increase, given a restriction on the minimum transition width of your waveform imposed by fidelity constraints; there is a trade-off between sparsity and bandwidth; and you can make your demodulator tunable, so that based on the spectrum you can illuminate different input signals.
0:18:00Thank you.
0:18:06We should have time for maybe two questions, maybe one and a half.
0:18:15Q: In your simulations, you assume that you have a perfect binary sequence that you are putting into the system. You have constrained it, but the sequence is still perfect. Your motivation for doing this, as I understand the magnetic media argument, is that these waveforms would not be perfect. Have you looked at what the effect would be of such imperfections showing up in the final waveform?
0:18:46A: We did look at that, but we had trouble finding a way to capture it in the discrete model that we are using.
0:18:55Q: So is that going to be a big problem with using this in practice?
0:19:05A: We think that using these constrained sequences would reduce the effect of those imperfections; that was our motivation for using them. But I do not know specifically what the effects would be of putting this into practice.
0:19:27We have time for one quick question.
0:19:35Q: Can you elaborate a little bit on how you arrive at a discrete model in frequency?
0:19:43A: [inaudible]
0:20:14Okay, thank you very much.