0:00:13 | thank you |

0:00:14 | So this talk will be about a constrained random demodulator. This is work that I did with my adviser Robert Calderbank and also with Waheed Bajwa. |

0:00:24 | um so first a little history of sampling |

0:00:27 | um we start with the the well known result uh from chan and |

0:00:32 | and nyquist that if you have a band one band limited signal |

0:00:36 | then um |

0:00:38 | you can um perfectly reconstruct that signal if you have enough samples |

0:00:42 | Another result, from the early seventies, looked at taking sub-Nyquist samples of a signal and then doing parameter estimation to identify the parameters of a single tone in a large bandwidth. |

0:01:00 | More recently we have the results of compressed sensing, which exploit sparsity as a prior on the input signals. Some examples are chirp sampling, Xampling, and the random demodulator. |

0:01:18 | A brief note about compressed sensing, since I'm sure most of you are well aware of it: we have an underdetermined system of linear equations and a sparse input vector of which we are taking linear measurements. If our measurement matrix Φ satisfies a near-isometry, captured in the restricted isometry property (RIP), then we can make very nice statements about reconstruction of that sparse vector. |
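The near-isometry the speaker mentions can be checked numerically. Below is a minimal sketch, assuming NumPy and a plain random ±1 measurement matrix (an illustrative stand-in, not the talk's demodulator matrix): for sparse vectors x, a matrix Φ with good RIP behavior keeps ‖Φx‖ close to ‖x‖.

```python
import numpy as np

rng = np.random.default_rng(0)

W, R, S = 256, 64, 4  # signal length, number of measurements, sparsity

# Random +/-1 measurement matrix, normalized so columns have unit norm
Phi = rng.choice([-1.0, 1.0], size=(R, W)) / np.sqrt(R)

# Draw S-sparse vectors and record ||Phi x|| / ||x||; near-isometry means
# these ratios concentrate around 1 (the content of the RIP).
ratios = []
for _ in range(200):
    x = np.zeros(W)
    support = rng.choice(W, size=S, replace=False)
    x[support] = rng.standard_normal(S)
    ratios.append(np.linalg.norm(Phi @ x) / np.linalg.norm(x))

ratios = np.array(ratios)
```

For a matrix that is far from a near-isometry, these ratios spread out, and sparse reconstruction guarantees break down.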

0:01:52 | Another line of research started back with Prony, around the time of the French Revolution. Prony was doing parameter estimation: he was trying to approximate a curve of this form given a set of samples. If this is the curve you are trying to fit, then you need at least 2n points to be able to do that. |
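The 2n-point counting argument can be seen in the simplest case. A toy sketch, assuming NumPy (this is only the n = 1 counting argument, not Prony's full method): a single exponential c·z^t has two unknowns, so two samples determine it, and n exponentials need 2n samples.

```python
import numpy as np

# Toy case of Prony's counting argument: x[t] = c * z**t has two unknowns
# (amplitude c and base z), so two samples pin it down.
c, z = 2.0, 0.9
x = np.array([c * z**t for t in range(2)])

z_hat = x[1] / x[0]  # ratio of consecutive samples recovers the base z
c_hat = x[0]         # the t = 0 sample recovers the amplitude c
```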

0:02:22 | From this perspective, the Shannon approach says that a bandlimited signal has 1/T degrees of freedom per unit time, and so if you sample at the rate 1/T, then you will have a sufficient representation of that signal. |

0:02:43 | This idea was generalized more recently by Vetterli in the idea of the rate of innovation: if any signal, more general than bandlimited, can be described by a finite number of degrees of freedom per unit time, which he calls the rate of innovation, then that is the necessary sampling rate to be able to describe that signal. |

0:03:10 | Now we will concentrate on compressed sensing, and in particular the random demodulator, which was presented by Tropp and his colleagues. |

0:03:24 | The basic idea behind the random demodulator is that you take the signal you are trying to sample, multiply it by a random waveform, and then integrate this product over a window longer than the Nyquist period of the signal. Then you take samples at the output of the integrator, which acts as a low-pass filter. |

0:03:48 | This continuous-time process can be represented as a discrete-time process described by a matrix D, which represents the multiplication by the random waveform, and a matrix H, which is the integration operator. So our measurement matrix acting on sparse frequency vectors is given by the product H times D times F, where F is the inverse Fourier transform matrix. |
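The discrete model can be written out directly. A minimal sketch assuming NumPy; the sizes W and R and the unconstrained ±1 chipping sequence are illustrative choices, not the talk's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

W, R = 128, 16  # Nyquist-rate signal length and number of measurements
L = W // R      # chips integrated per output sample

# D: diagonal matrix of the random +/-1 chipping waveform
eps = rng.choice([-1.0, 1.0], size=W)
D = np.diag(eps)

# H: integrate-and-dump operator; each row sums L consecutive chips,
# playing the role of the low-pass filter followed by sampling
H = np.kron(np.eye(R), np.ones((1, L)))

# F: inverse DFT matrix, so that Phi acts on sparse frequency-domain vectors
F = np.fft.ifft(np.eye(W), axis=0)

Phi = H @ D @ F  # the R x W measurement matrix Phi = H D F

# A 1-sparse tone in frequency maps to R sub-Nyquist measurements
x = np.zeros(W, dtype=complex)
x[5] = 1.0
y = Phi @ x
```

Applying Φ to a frequency-sparse vector is the same as chipping its time-domain waveform and integrating, which is the continuous-time system the speaker describes.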

0:04:22 | The rate at which you take samples is much lower than the Nyquist rate of your input signal. But are you really sampling slower than Nyquist? |

0:04:36 | The key idea here is that the random waveform generator of the random demodulator requires a waveform that switches at the Nyquist rate of the input signal. |

0:04:48 | And the reason we want to sample at a rate lower than Nyquist is that it is hard to build high-fidelity, high-rate analog-to-digital converters, because of the limit that the capacitors in the circuit place on the time it takes to change states in the converter. |

0:05:14 | The general rule is that a doubling of the sampling rate corresponds to a one-bit reduction in the resolution of the ADC. And since this random waveform is generated from the same capacitors, it may be just as hard to generate a fast-switching waveform as it is to build a high-speed analog-to-digital converter. |

0:05:40 | Along the same lines, let's take an aside back to the days of magnetic recording. The problem here is that engineers were trying to write lots of data on these magnetic disks, but your idealized square pulses can't actually be recorded in practice. What you are left with are smoothed pulses, and these smooth pulses cause trouble if you try to space them too close together. |

0:06:11 | The transitions you are trying to detect in the waveform are given by these peaks, alternating in sign, and the ability of your read-back mechanism to detect the transitions is compromised when the peaks are shifted or changed in amplitude. |

0:06:30 | So the challenge for these engineers was how to write data while keeping the transitions separated enough in time to limit intersymbol interference. Their solution was run-length limited (RLL) codes. |

0:06:46 | These codes are written as NRZI data and are parameterized by d and k, where d is the minimum number of zeros between ones. The zeros in the sequence indicate no transition in the waveform, and the ones represent a transition. k is the maximum number of zeros between ones, so the maximum time between transitions. |

0:07:17 | So d enforces the separation between transitions, and k aids timing recovery, which was necessary in this magnetic recording situation. |
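A (d, k) sequence and its NRZI waveform can be sketched as follows. This is a hypothetical illustrative generator (forced bits plus fair coin flips), not the maxentropic Markov-chain generator discussed later in the talk:

```python
import random

def rll_bits(n, d, k, seed=0):
    """Generate n bits of a random (d, k) run-length-limited sequence:
    at least d and at most k zeros between consecutive ones."""
    rng = random.Random(seed)
    bits, zeros = [], k  # start as if a run just ended, so a 1 is allowed
    while len(bits) < n:
        if zeros < d:            # must extend the run of zeros
            bits.append(0)
            zeros += 1
        elif zeros >= k:         # must place a transition now
            bits.append(1)
            zeros = 0
        else:                    # free choice
            b = rng.randint(0, 1)
            bits.append(b)
            zeros = 0 if b else zeros + 1
    return bits

def nrzi_waveform(bits):
    """NRZI mapping: a 1 flips the +/-1 level (a transition), a 0 holds it,
    so runs of zeros become wide flat stretches in the waveform."""
    level, wave = 1, []
    for b in bits:
        if b:
            level = -level
        wave.append(level)
    return wave

bits = rll_bits(50, d=1, k=7)
wave = nrzi_waveform(bits)
```

The d constraint widens the flat stretches of the waveform, which is exactly the property reused for the constrained random demodulator below.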

0:07:33 | What we end up with is a rate loss and increased correlations when we use (d, k)-constrained sequences, but the advantage is a rate gain from the increased transition spacing: we can space our data bits more finely than the transitions in the waveform. |

0:07:54 | An example of an RLL code that was used in IBM disk drives is the (2, 7) code. It is a rate-1/2 code, but we get a factor-of-three gain in the minimum transition spacing, and so we see a fifty percent gain in recording density. |
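The fifty percent figure is just this back-of-envelope product (illustrative arithmetic, assuming the minimum transition spacing scales as d + 1 chip periods):

```python
code_rate = 1 / 2     # the (2,7) code maps 1 data bit to 2 channel bits
d = 2                 # at least two zeros between consecutive ones
spacing_gain = d + 1  # minimum transition spacing grows from 1 to 3 chips
density_gain = code_rate * spacing_gain  # net gain in recording density
```

The rate loss of the code is more than paid for by packing channel bits three times as densely for the same physical transition spacing.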

0:08:15 | We use this same idea in the random demodulator and replace the unconstrained waveform of the random demodulator with a constrained waveform. So we have a waveform with a larger width between transitions, meaning that the Nyquist rate of our signal can be higher given a fixed minimum transition width in the waveform. |

0:08:50 | But what do these correlations do to the properties of our random demodulator? Again, our measurement matrix Φ is given by the product H D F, and each entry is a sum of randomly signed Fourier components of the Fourier matrix. |

0:09:12 | If we want Φ to satisfy the RIP, then we want it to be a near-isometry, which is captured in this statement right here, saying that our measurement matrix is nearly an orthonormal system. |

0:09:30 | If we look at how good an average system is, given our constrained sequences, we see that in expectation we have the identity matrix plus this matrix Δ, which is given right here, and you can see it is completely determined by the correlations in our sequence. |

0:09:54 | So we have converted this problem into a problem of looking, on average anyway, at the matrix Δ and how it performs. Note that this function a right here, which will come up later, is the inner product between the columns of the matrix H. |

0:10:17 | Looking at the spectral norm of the matrix Δ, we look at its Gram matrix. Each entry of the Gram matrix is given by this expression, which depends on the function F-hat, which we call the windowed spectrum: it is the spectrum of our random sequence, that is, the Fourier transform of the autocorrelation function, multiplied by this window function. |

0:10:49 | As W over R increases, where W is the size of the input vectors and R is the number of measurements, which is the situation we are looking at, this window gets larger, and the windowed spectrum can be approximated by the actual spectrum of the random waveform, taking into account the lack of the zero term, this minus one right here. |

0:11:17 | So our Gram matrix becomes a diagonal matrix with the square of the spectrum on the diagonal, and the worst-case spectral norm of our matrix Δ is determined by the part of the spectrum that is farthest away from one. |

0:11:45 | If we look at some specific examples of random waveforms: for the random demodulator, which uses unconstrained independent sequences, we have a flat spectrum, which gives us a matrix Δ that is identically zero, and so the system provides uniform illumination for all sparse input vectors. |

0:12:15 | For the second example we looked at a repetition-coded sequence. These repetition codes take an independent Rademacher sequence and repeat each entry one time, so that every pair of entries is completely dependent. This is a plot of the spectrum of such a sequence. |

0:12:37 | You can see that at high frequencies the spectrum rolls off to zero, which means that the spectral norm of our Δ matrix goes to one, and we do not expect the RIP to be satisfied at all if we use these repetition-coded sequences, because the high frequencies are not well illuminated. |
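The roll-off can be checked directly. A sketch assuming NumPy: in expectation the repetition-coded sequence has autocorrelation 1 at lag 0 and 1/2 at lag 1 (zero beyond), giving spectrum S(f) = 1 + cos(2πf), which vanishes at the highest frequency f = 1/2.

```python
import numpy as np

rng = np.random.default_rng(2)

# Repetition-coded Rademacher sequence: each +/-1 entry is repeated once,
# so every pair of adjacent entries is completely dependent.
n = 4096
base = rng.choice([-1.0, 1.0], size=n // 2)
seq = np.repeat(base, 2)

# Empirical autocorrelation at lags 0 and 1 (expected: 1 and 1/2)
r0 = np.mean(seq * seq)
r1 = np.mean(seq[:-1] * seq[1:])

# Corresponding spectrum S(f) = 1 + cos(2*pi*f) at the highest frequency
S_high = 1 + np.cos(2 * np.pi * 0.5)
```

Since S(1/2) = 0, high-frequency tones are invisible to this modulating sequence, which is why the speaker does not expect the RIP to hold here.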

0:13:04 | The third random waveform we consider is what we call generalized RLL sequences. They are generated from a Markov chain that imposes our d constraint and our k constraint on the transition width of the waveform. |

0:13:22 | The autocorrelation of such a sequence is given by this expression, which depends on the transition probability matrix of our Markov chain and also on the output symbols, where the vector b here is just a collection of the output symbols for each state, and the vector a is b pointwise-multiplied by the stationary distribution of our Markov chain. Because the vector b is orthogonal to the all-ones vector, the autocorrelation decays geometrically at a rate that depends on the second-largest eigenvalue of the matrix P. |
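A concrete, if simplified, instance of this eigenvalue statement: a two-state chain with output symbols ±1 and switching probability p has second-largest eigenvalue 1 − 2p, and its autocorrelation is exactly (1 − 2p)^m. A sketch assuming NumPy; this toy chain is a stand-in for the talk's RLL-generating chain, whose P is on the slide:

```python
import numpy as np

# Toy stand-in for the RLL-generating chain: two states, output symbols
# b = (+1, -1), switch with probability p, hold with probability 1 - p.
p = 0.3
P = np.array([[1 - p, p],
              [p, 1 - p]])  # transition probability matrix
b = np.array([1.0, -1.0])   # output symbol for each state

# Eigenvalues of P are 1 and 1 - 2p; because b is orthogonal to the
# all-ones vector (the eigenvector for eigenvalue 1), the autocorrelation
# decays at the rate of the second-largest eigenvalue.
lam2 = np.linalg.eigvalsh(P)[-2]
r = lam2 ** np.arange(6)    # r(m) = (1 - 2p)**m: geometric decay
```

A smaller p (fewer transitions, longer runs) pushes the second eigenvalue toward 1 and slows the decay, matching the observation below that larger d means slower-decaying correlations.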

0:14:11 | Here in this plot we illustrate the geometric decay of the correlations of the RLL sequences. This group of plots right here is for d equal to one, and these are for d equal to two, so you can see that the lower the value of d, the faster the correlations decay in the sequence. There is also a slight dependence on k, but that is much less pronounced than the dependence on d. |

0:14:41 | This plot gives an example spectrum for a d equal to one, k equal to twenty RLL sequence. Here you can see that the spectrum rolls off at high frequency, but it does not roll off to zero, so the spectral norm of our Δ matrix will not go to one, and we have some expectation that the RIP is satisfied. |

0:15:10 | Furthermore, notice that in the region marked one here, for the low-pass signals, the spectrum is very close to one, and for the high-pass signals, marked two, the spectrum is very close to zero. So if we restrict our input signals to low-pass and high-pass respectively, then we would expect to see different performance in reconstruction. |

0:15:37 | So that is what we did. These plots show the probability of reconstruction versus sparsity for low-frequency and high-frequency signals. I should note that these values for bandwidth are normalized so that both the unconstrained and the constrained sequences have the same minimum transition width, that is, the minimum width between transitions in the waveform. |

0:16:07 | We see that our constrained random demodulator performs better than the random demodulator for low-pass signals, but it performs worse for high-pass signals. What this tells us is that the modulating sequence can be chosen, based on its spectrum, to illuminate different regions of the spectrum of our input signals. |

0:16:34 | Finally, even if we do not restrict ourselves to high-pass or low-pass signals, we still see an advantage in the bandwidth of the signals that we can acquire and reconstruct. Here you see plots for the random demodulator and our constrained random demodulator: if you allow a thirteen percent reduction in your allowed sparsity, you can still see a twenty percent gain in the bandwidth you are able to use. So there is a trade-off between the sparsity and the bandwidth of your input signals. |

0:17:14 | Finally, we note that our theoretical results on the RIP include the maximum dependence distance in the random waveform and the matrix Δ, and we see a slight inflation in the allowed sampling rate. |

0:17:33 | So the takeaways are that there is a bandwidth increase given a restriction on the minimum transition width of your waveform under fidelity constraints, that there is a trade-off between sparsity and bandwidth, and that you can make your demodulator tunable, so that based on its spectrum you can illuminate different input signals. Thank you. |

0:18:06 | We should have time for maybe two questions, maybe one and a half. |

0:18:15 | In your simulations, you assume that you have a perfect binary sequence that you are putting into the system. You have constrained it, but the sequences are still perfect. And your motivation for doing this, as I understand the magnetic media argument, is that these things would not be perfect. Have you looked at what the effect would be of imperfections in the actual waveform? |

0:18:46 | So we did look at that, but we had trouble finding a way to capture it in the discrete model that we are using. |

0:18:55 | So is that going to be a big problem with using this in practice? |

0:19:05 | We think that using these constrained sequences reduces the effect of those imperfections; that was our motivation for using them. But I do not know specifically what the effects would be of putting this into practice. |

0:19:27 | Do we have time for one quick question? |

0:19:35 | Can you elaborate a little bit on how you handle the discretization in frequency? |

0:19:43 | (answer inaudible) |

0:20:14 | Okay, thank you very much. |