0:00:13 Alright.
0:00:14 Welcome, everybody, thanks for coming to my talk. As already mentioned, it is about a first-order performance analysis for structured least squares based ESPRIT, and it is joint work with Martin Haardt from Ilmenau.
0:00:25 Now, for a brief motivation: I probably don't have to tell you that high-resolution parameter estimation is interesting for a number of applications. In particular, ESPRIT-type parameter estimation schemes are often used because they are very simple and very flexible: they can be evaluated in closed form, you don't need any peak search or anything, and they still perform close to the Cramér-Rao bound.
0:00:45 And they are all based on the shift invariance equations, which are a set of overdetermined equations that are typically solved using least squares.
0:00:55 However, least squares is actually suboptimal. It is suboptimal because it ignores the fact that there are also estimation errors in the subspace itself.
0:01:03 There is a better technique, which is called structured least squares (SLS), which takes these estimation errors into account and also explicitly exploits the structure of the shift invariance equations. It was proposed in ninety-seven, and it comes with an improved performance. However, the problem is that so far, structured least squares based ESPRIT was only evaluated using simulations, so there is no analytical statement on when it performs better and by how much. An analytical performance evaluation is therefore highly desirable.
0:01:30 The goal of this paper was to apply the same framework we previously used to analyze least squares based ESPRIT — the framework by Vaccaro that gives a first-order expansion of the signal subspace — in order to analyze structured least squares based ESPRIT. The corresponding graph is shown down here.
0:01:53 So we had analyzed various versions — standard ESPRIT, Unitary ESPRIT, and more; you can even do it for non-circular ESPRIT and others — but all of those are based on least squares. The purpose here was to extend the analysis to incorporate structured least squares.
0:02:06 Which brings me to the outline. After the motivation we just had, I will first go through a brief review, showing you again the shift invariance equations and what least squares actually means, and then showing you the concept of the first-order perturbation of the SVD, which you might find interesting to use in other fields as well. Then I will show our earlier results for least squares based ESPRIT; the main part will focus on structured least squares, up to the solution; and then I will show some simulation results.
0:02:30 So let's start with the review.
0:02:32 What is shift invariance? ESPRIT is based on the fact that you can divide the array into two identical subarrays, which see the same observations except for a phase shift, which is encoded in this μ — the spatial frequency. The spatial frequencies link to your parameters of interest: for instance, if you want to do direction-of-arrival estimation, μ and the direction of arrival are connected via a simple relation. Mathematically, you can use selection matrices J1 and J2, which operate on your array steering vector a(μ) to select the first and the second subarray. The shift invariance says that both subarrays get the same observation except for a phase shift, which contains the parameter of interest.
0:03:08 This is for a single source. For multiple sources, J1 and J2 operate on the array steering matrix A, which contains all the array steering vectors, and your parameters of interest are contained in Φ, a diagonal matrix holding the spatial frequencies. This is the shift invariance equation in matrix form; the problem is that the array steering matrix is of course unknown.
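To make the review concrete, these are the equations being described, in standard ESPRIT notation (my transcription, not the speaker's slides; d denotes the number of sources):

```latex
J_1\,\mathbf{a}(\mu)\,e^{\mathrm{j}\mu} = J_2\,\mathbf{a}(\mu) \quad\text{(single source)},
\qquad
J_1\,\mathbf{A}\,\mathbf{\Phi} = J_2\,\mathbf{A} \quad\text{(multiple sources)},
\qquad
\mathbf{\Phi} = \operatorname{diag}\!\left\{e^{\mathrm{j}\mu_1},\ldots,e^{\mathrm{j}\mu_d}\right\}.
```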
0:03:26 So, to get rid of the unknown array steering matrix, what we do is take our observations — let's say X is a matrix which contains N subsequent observations of our M sensors — and we just compute its SVD. From the SVD we get an estimate of the signal subspace, namely the d dominant left singular vectors. Then we use the fact that the array steering matrix and this matrix U_s span the same column space — approximately, because there is noise — so they are related via a transform matrix T, and this can be used to eliminate A. What we actually end up with are the shift invariance equations that need to be solved: Ψ is the only unknown, U_s we have an estimate for, and the eigenvalues of Ψ are what we want, since their arguments give the spatial frequencies. This is the basis for ESPRIT — just a very quick review.
0:04:08 The main points here are that the shift invariance equations we need to solve are overdetermined, and that we only have an estimate for the subspace, which is not exact. So how do we solve them? Typically, people solve them using least squares, and least squares just means minimizing the least squares fit between the left-hand and the right-hand side of the equation with respect to Ψ. This gives a very simple closed-form solution: you can use the pseudo-inverse. But the problem is that you ignore the fact that you do not know U_s exactly — you implicitly assume that it is perfectly known, which is not true; we know that there is an error in there.
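As a concrete illustration of this review, here is a minimal numpy sketch of least squares based ESPRIT for a uniform linear array with maximum-overlap subarrays. It is my own sketch, not from the talk; the array size, source frequencies, and all variable names are assumptions.

```python
import numpy as np

def ls_esprit(X, d):
    """LS-ESPRIT sketch: X is M x N (sensors x snapshots), d = number of
    sources. Returns estimates of the spatial frequencies mu."""
    # Signal subspace estimate: d dominant left singular vectors of X
    U = np.linalg.svd(X, full_matrices=False)[0]
    Us = U[:, :d]
    # Maximum-overlap subarray selection for a ULA: drop last / first row
    U1, U2 = Us[:-1, :], Us[1:, :]          # J1 @ Us and J2 @ Us
    # Least squares solution of the overdetermined invariance equations
    Psi = np.linalg.pinv(U1) @ U2
    # The eigenvalues of Psi approximate e^{j mu_i}
    return np.angle(np.linalg.eigvals(Psi))

# Hypothetical example: 2 sources, 8 sensors, 50 snapshots
M, N, mu = 8, 50, np.array([0.3, 0.8])
A = np.exp(1j * np.outer(np.arange(M), mu))      # ULA steering matrix
S = (np.random.randn(2, N) + 1j * np.random.randn(2, N)) / np.sqrt(2)
X = A @ S + 0.05 * (np.random.randn(M, N) + 1j * np.random.randn(M, N))
print(np.sort(ls_esprit(X, 2)))                  # close to [0.3, 0.8]
```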
0:04:38 And that is the idea of structured least squares. Structured least squares changes this cost function. What does it change? For each occurrence of the subspace estimate U_s it incorporates an additional ΔU_s term, which explicitly models the fact that we have an estimation error in the subspace and which we try to correct, so that the two sides of the equation align in a better way. In addition, there is a regularization term to make sure that this update stays small, penalizing a large ΔU_s. This is the cost function for structured least squares. The nice thing is that it takes the errors in the subspace into account; the drawback is that it is no longer linear in the unknowns but quadratic, so it is typically solved iteratively via a local linearization. However, it has been shown that only very few iterations are required — in the high SNR regime, only one iteration — so you can see it just as a correction step.
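Continuing the sketch above, this is a hedged numpy rendering of that single linearized SLS correction step, without regularization — the special case the talk analyzes below. The block structure (a pseudo-inverse applied to the vectorized least squares residual) mirrors what the talk describes; the minimum-norm solution via `pinv` is my stand-in for the vanishing-regularization limit.

```python
import numpy as np

def sls_update(Us, Psi_ls):
    """One linearized SLS step for a ULA with maximum-overlap subarrays.
    Us: M x d subspace estimate, Psi_ls: d x d least squares solution."""
    M, d = Us.shape
    U1, U2 = Us[:-1, :], Us[1:, :]
    R = U1 @ Psi_ls - U2                         # residual after LS
    J1 = np.hstack([np.eye(M - 1), np.zeros((M - 1, 1))])
    J2 = np.hstack([np.zeros((M - 1, 1)), np.eye(M - 1)])
    # Linearized model: vec(R) + F @ [vec(dPsi); vec(dUs)] = 0, using
    #   vec(U1 dPsi)       = (I_d kron U1) vec(dPsi)
    #   vec(J1 dUs Psi_ls) = (Psi_ls^T kron J1) vec(dUs)
    #   vec(J2 dUs)        = (I_d kron J2) vec(dUs)
    F = np.hstack([np.kron(np.eye(d), U1),
                   np.kron(Psi_ls.T, J1) - np.kron(np.eye(d), J2)])
    upd = -np.linalg.pinv(F) @ R.reshape(-1, order='F')
    dPsi = upd[:d * d].reshape(d, d, order='F')  # we only need this part
    return Psi_ls + dPsi
```

Usage would be `np.angle(np.linalg.eigvals(sls_update(Us, Psi)))` after the LS step shown earlier.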
0:05:24 Alright. Now, to get to the performance analysis for this, we need to look into the source of error. The source of error here is this ΔU_s, the error in the signal subspace, which we need to grasp analytically.
0:05:36 The framework we are using for this is the one by Vaccaro, which I just want to review briefly — it is also very simple. Take your M-dimensional observations X without any noise: there you have the true signal subspace and the true noise subspace if you compute the SVD. In the presence of noise you only have an estimate, so you can say that the estimated signal subspace is the true one plus an error term, and this is what we are trying to find. This error term you can always expand into one part which lies in the noise subspace and one part which lies in the signal subspace — simply because it lives in this M-dimensional space, so you can always decompose it into these two subspaces.
0:06:10 The interpretation of the first component: it is the error of the signal subspace which lies in the noise subspace, so it really models how much of the noise leaks into the signal subspace — the error of the subspace itself. The second one is the error of the signal subspace inside the signal subspace: it models how the individual singular vectors inside the signal subspace — the particular basis we choose — are perturbed. Obviously, the second one plays no role for ESPRIT, because the particular basis is irrelevant there; only the first one matters. But extensions exist: we only use the first term because the second one is irrelevant here, yet if you want, you can easily incorporate the second one as well.
0:06:45 The first term has been proposed by Vaccaro, and as you can see, it is a very simple expression — a first-order expansion in the noise, i.e., in the perturbation. The second one was actually proposed by my colleague, but as I said, we do not need it for this work.
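A quick numerical sanity check of this first-order subspace expansion — my own sketch, under the assumption that the first-order term has the standard form (noise-subspace projection of the perturbation, mapped through the right singular vectors and inverse singular values):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 8, 50, 2
# Noise-free rank-d data X0 and its true subspaces
X0 = (rng.standard_normal((M, d)) + 1j * rng.standard_normal((M, d))) @ \
     (rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N)))
U, s, Vh = np.linalg.svd(X0)
Us, Un = U[:, :d], U[:, d:]
Vs, Ss_inv = Vh[:d, :].conj().T, np.diag(1.0 / s[:d])

W = 1e-4 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
# Assumed first-order expansion of the subspace error (noise-subspace part)
dUs_lin = Un @ Un.conj().T @ W @ Vs @ Ss_inv

# Empirical subspace error, projected onto the noise subspace; projecting
# removes the irrelevant basis ambiguity inside the signal subspace
Us_hat = np.linalg.svd(X0 + W)[0][:, :d]
Us_hat = Us_hat * np.exp(-1j * np.angle(np.sum(Us.conj() * Us_hat, axis=0)))
dUs_emp = Un @ Un.conj().T @ Us_hat
print(np.linalg.norm(dUs_emp - dUs_lin))   # second order in W, hence tiny
```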
0:06:58 Li and Vaccaro, already in ninety-three, used this result to analyze standard ESPRIT and gave a first-order expansion for the estimation error in the k-th spatial frequency for standard ESPRIT, which is a simple expression. Based on this work, we extended it: we showed last year at ICASSP in Dallas that you can in addition perform the statistical expectation of this, assuming white complex Gaussian noise.
0:07:19 I should probably emphasize that this framework is asymptotic in the effective SNR, so you do not need a large number of snapshots or anything — you can have a single snapshot if you want, as long as the noise variance is low. And it needs no particular assumptions about the statistics: you do not need Gaussianity of the noise, you do not even need Gaussianity of the symbols; you just need the perturbation to be small. But if you do assume Gaussianity, then you can of course compute the expectation in closed form and you get the mean square error. This we showed for least squares based standard ESPRIT and also for Unitary ESPRIT, and you can do more. These were our previous results, and based on these we now try to extend them to incorporate structured least squares.
0:07:55 So, what is done here: we first restrict our attention to a special case of structured least squares, which uses a single iteration and no regularization. The reason is that these assumptions are asymptotically optimal for structured least squares in the high SNR regime, and since the performance analysis we do is asymptotic in the high SNR anyway, it is fine to assume this — it costs nothing asymptotically.
0:08:18 Under these assumptions you can express the cost function and the solution in a very simple way. You can say that the solution Ψ for structured least squares is equal to the initial solution, given by least squares, plus an update term, and the update term is the solution of this cost function, which is of course quadratic, as I said: there is a term that does not depend on the unknowns, there is a linear term, and there is a quadratic term. The quadratic term we neglect — that is the linearization — so we are back to a linear least squares problem, and this problem has of course a very simple solution. This would be the update for Ψ, and this would be the update for the subspace if you wanted to do a second iteration; we actually do not need the second one, since we only do one.
0:08:58 So the main message here is that the update can be explicitly computed by taking this vector r_LS, which is the vectorized version of the residual matrix after doing least squares, and multiplying it by the pseudo-inverse of this matrix F, which is the linear mapping. And for this product we have to find a first-order expansion.
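For reference, the step from the matrix equation to this vectorized form rests on the standard identity (not specific to the talk):

```latex
\operatorname{vec}(\mathbf{A}\,\mathbf{X}\,\mathbf{B}) = \left(\mathbf{B}^{T}\otimes\mathbf{A}\right)\operatorname{vec}(\mathbf{X}),
```

which turns the linearized equation in ΔΨ and ΔU_s into one linear system with the block matrix F, so that the update is the pseudo-inverse of F applied to r_LS, as stated.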
0:09:15 How have we done it? By looking at both terms individually. We start with the first term, the pseudo-inverse of the matrix F. What we can show is that you can express this matrix F̂ as a constant matrix F plus a perturbation ΔF, where F is constant in the sense that it does not depend on the perturbation itself. So if you look at one realization of the random perturbation, this term will be constant and this one will be linear in it, and together they give the matrix F̂. And therefore, if we look at its pseudo-inverse — since this part is zero mean — it is not very hard to see that the pseudo-inverse is actually equal to the pseudo-inverse of this constant matrix, independent of the perturbation, plus a linear term, plus a quadratic term, plus higher orders. We do not actually need to expand it explicitly: it is fine to know that this constant term is, as we will see, all we need — the linear term is actually not needed — and this simplifies the analysis greatly.
0:10:02 For the second term, the vector r_LS, it is not difficult to see that it can be written as a linear expansion in the error in the subspace ΔU_s, multiplied by a matrix mapping the subspace error to the residual, plus a quadratic term, which we neglect. And the error in the subspace, by the result of Vaccaro, itself has a linear expansion in terms of the actual perturbation, the noise — again of the form: a constant matrix, plus a linear term, plus higher orders. Now we collect both: putting this and this together, we see that the vector r_LS has a linear expansion in the noise.
0:10:37 Now we combine these two results back into the original expression. If you multiply them out, you get a linear term — the constant term times the linear term — plus a quadratic term, the linear times the linear term. The quadratic term we again neglect; we keep first order. So this shows that we do not even need the linear expansion of the pseudo-inverse. And then we get this very simple result, which is pretty intuitive: starting from the noise, you have one mapping matrix from the noise to the subspace, one from the subspace to the residuals, and then this matrix here, which maps the residual to the update vector containing ΔΨ and ΔU_s. As I said, we are only interested in the upper part, so the final result for this upper part is actually this one — again pretty intuitive: a concatenation of mappings.
0:11:19 Then you can plug this back into the original expansion of the estimation error, and you find a first-order expansion of the estimation error of the spatial frequencies. Again it is very simple, and in its structure it is very similar to the least squares expansion we showed previously: there we had this vector r_LS, now it is r_SLS — slightly different, but the form is still the same. And again, if you want, you can form the mean, i.e., the mean square error: if you assume zero-mean circularly symmetric white noise — you actually do not need Gaussianity for this — you get a mean square error that is again a very compact and simple expression.
0:11:51 Now, what is all this good for? Why do we go through this analytical derivation, and what does it show us now that we have the result? What I think it is good for: if you look at a specific case, we can simplify the expression so much that you actually gain insight into what the performance of these schemes is in that very specific setting. To demonstrate this point I brought one example, which is not in the paper due to lack of space, but which I still think is interesting, to show what applications this result has. And the example is of course the simplest one you can think of, which is a single source.
0:12:22 If you consider a single source, we showed that for least squares based ESPRIT the mean square error has a pretty compact expression if you consider a uniform linear array of N sensors: it has one over the effective SNR in front, and then it is basically quadratic in 1/N. The Cramér-Rao bound also has a very simple expression, which means that you can compute the asymptotic efficiency — again asymptotic in the effective SNR, so it is valid even for a single snapshot. It is given by this expression, so you have a closed-form expression for the asymptotic efficiency depending only on N, and it is exact except for the fact that it is asymptotic.
0:12:55 What you can see is that it starts at one for two and three sensors, and then it goes down. So least squares based ESPRIT is not efficient for large arrays, not even for a single source.
0:13:05 We did the same derivation for structured least squares, and after a number of manipulations we again found a closed-form expression for the mean square error. It is a bit more involved, but you can do the same thing: take the ratio with the Cramér-Rao bound, and you find a closed-form expression for the asymptotic efficiency, which is a ratio of polynomials in N. Interestingly, the first three coefficients agree; then they start to differ. If you plot it on the same scale as before, it looks almost equal to one, but it is actually not: if you zoom in a little, you can see it starts at one, goes down a little bit, and then goes back up. We do not really have a physical explanation for that; mathematically you can prove that it is so, and with simulations you can verify it. And you really get the values of the asymptotic efficiency as exact numbers, so this is a pretty valuable result.
0:13:47 It would be interesting to extend this to two sources, to see the performance in terms of separation, correlation, and the other parameters.
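A small Monte Carlo sketch (mine, not from the paper) of how such single-source efficiency numbers can be checked empirically: estimate the MSE of single-snapshot LS-ESPRIT on a ULA and compare it with the deterministic single-source Cramér-Rao bound, whose closed form below is the standard one and is stated here as an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def ls_esprit_one_source(x):
    """LS-ESPRIT for a single source and a single snapshot x (length N)."""
    u = x / np.linalg.norm(x)                   # rank-1 subspace estimate
    psi = np.vdot(u[:-1], u[1:]) / np.vdot(u[:-1], u[:-1])
    return np.angle(psi)

N, mu, trials = 10, 0.5, 2000
sigma = 1e-2                                    # 40 dB SNR, unit amplitude
a = np.exp(1j * mu * np.arange(N))
errs = []
for _ in range(trials):
    w = sigma / np.sqrt(2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    errs.append(ls_esprit_one_source(a + w) - mu)
mse = np.mean(np.square(errs))
crb = 6 * sigma**2 / (N * (N**2 - 1))           # assumed single-source CRB
print("empirical asymptotic efficiency:", crb / mse)   # below 1 for larger N
```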
0:13:55 Alright, just a few more words on the simulations. In the simulations we compare the empirical error — which you get by actually performing ESPRIT on random data, computing the estimated spatial frequencies, computing the error, and averaging — with the semi-analytical results, which still depend on the noise realizations, so we average those over the noise, and with the fully analytical results, for which no simulations are actually needed; only for the Monte Carlo curve is everything needed, because there we average over the random realizations.
0:14:21 This first example here is for uncorrelated sources. As you can see, the performance of Unitary ESPRIT based on plain least squares and based on structured least squares is very close, so it is of course interesting to see whether this really small gap can still be reliably predicted. The answer is yes: our analytical results, which become asymptotically exact, show the same small gap, and the semi-analytical curves follow the empirical ones.
0:14:43 Another result: this is for sources which are very strongly correlated — there is a 0.99 correlation between any pair of sources. Now we have four curves: standard ESPRIT based on least squares and on structured least squares, and Unitary ESPRIT based on least squares and on structured least squares. Due to the strong correlation the gap is bigger, so you can distinguish the curves more clearly. Again, the analytical results become accurate for high SNR, and the semi-analytical ones follow the empirical curves.
0:15:06 And then we have the single source. For a single source we see the improvement, if you plot it versus the SNR for eight sensors, between least squares and structured least squares — and again the analytical results match.
0:15:17 Which brings me to the conclusions. What we presented is a first-order perturbation analysis of structured least squares based ESPRIT. It is based on the performance analysis framework for the SVD, which is a very nice concept that can also be used in different fields. The nice thing about it: it is asymptotic in the effective SNR, so either a small noise variance or a large number of snapshots — it can be both, whatever you want. And it is explicit, which means you do not need any assumptions about the statistics: you need the noise to be zero mean, but you do not need the sources to be Gaussian or the noise to be Gaussian; you just need the perturbation to be small. We have also shown the mean square error, assuming zero-mean circularly symmetric white Gaussian noise, and we have shown explicit expressions for a single source, where you can actually see the asymptotic behavior.
0:15:55 This concludes my talk. Thanks.
0:16:03 We have time for questions.
0:16:14 Yeah.
0:16:15 There is a relation, but there is also a difference. Okay, here: in total least squares you allow for an error in this expression — in your mapping matrices, let's say — but you assume that there are independent errors on the left-hand and the right-hand side. That is why this one is called structured least squares: total least squares would model two independent errors for the left and the right-hand side, as if they were different. But actually there is a structure in the shift invariance equations which tells you that these are not independent — they are almost the same, except for the selection matrices — and this structure should be incorporated into the solution.
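In symbols — my paraphrase of this answer, not the speaker's slides:

```latex
\underbrace{\left(\mathbf{J}_1\hat{\mathbf{U}}_s + \mathbf{E}_1\right)\mathbf{\Psi} = \mathbf{J}_2\hat{\mathbf{U}}_s + \mathbf{E}_2}_{\text{TLS: independent errors}}
\qquad\text{vs.}\qquad
\underbrace{\mathbf{J}_1\left(\hat{\mathbf{U}}_s + \Delta\mathbf{U}_s\right)\mathbf{\Psi} = \mathbf{J}_2\left(\hat{\mathbf{U}}_s + \Delta\mathbf{U}_s\right)}_{\text{SLS: one common subspace error}}
```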
0:16:51 (Short inaudible exchange with the questioner.) Yeah. Right.
0:17:16 It does not have to be — we do not constrain it explicitly to be unitary. It could be non-unitary.
0:17:25 Well, for ESPRIT you do not need the constraint that it is unitary. You can use any subspace estimate you want; it does not have to be unitary.
0:17:39 You can describe a subspace using any basis — they are all equivalent. Typically you use the SVD because it gives you an orthonormal basis, which is nice to work with, and it is simple. But you could use any other basis and it would have no impact on the performance. Any basis is fine.
0:17:53 Actually, when we started analyzing ESPRIT, in the first version we had defined the subspace without unit norm; when we then corrected it, we saw that it had no impact on the performance, which is just as it should be.
0:18:03 (Question from the audience, largely inaudible — apparently about whether one could, or should, enforce a unitarity constraint in the minimization.)
0:18:27 You could do that, but what would be the advantage? I mean, we do not need to assume a unitary T for deriving ESPRIT — we are not using it. But it would be possible.
0:18:40 Do you have a question?
0:18:43 (Question:) What about letting the number of snapshots go to infinity instead of the SNR? I am not so sure about that case.
0:18:50 So, what we actually need — let me go back — what we need is that this term in the framework here goes to zero. We have P, which is the power of the source, the noise variance, and N, which is the number of snapshots. So if you have a finite SNR and you let N go to infinity, it works in exactly the same way: you can let N go to infinity, or let the noise variance go to zero, or let the source power go to infinity.
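As I understand this answer, the asymptotic condition can be summarized as (my paraphrase):

```latex
\frac{\sigma^2}{N\,P} \;\to\; 0,
\qquad\text{e.g. } N\to\infty,\quad \sigma^2\to 0,\quad\text{or } P\to\infty .
```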
0:19:27 Yes, for a single source — only for a single source. It is not as bad for multiple sources. It was a surprise for us as well when we saw it for the first time, but you can verify it using simulations. It is surprising because even the low-resolution techniques are asymptotically optimal for a single source, but this one is not.
0:19:53 When there are more sources, it is not as bad. It is very hard to find these expressions explicitly for more sources, because the number of terms gets very large. We tried to simplify it — we actually did not get to the final result — but from simulations my experience is that the single source is kind of the worst case in terms of the comparison with the Cramér-Rao bound.
0:20:12 It also disappears, of course, if you replace least squares with structured least squares, which improves things in this respect. And it is just one correction term — it is not an iterative procedure you have to apply; already with a single iteration it basically disappears. So it is something simple; it is only one additional step.
0:20:34 (Question:) It seems that structured least squares beats least squares especially when the sources are correlated? — So, okay, translated: the subspace estimation is worse when the sources are correlated, and that is when structured least squares is better; is this the main reason, in your mind?
0:20:58 The problem is that there is not always a one-to-one mapping between a better subspace and a better performance of ESPRIT. As we started to see for the single source, you may get a better subspace, but the mean square error of ESPRIT stays almost the same: sometimes you get a better subspace and it does not help you in terms of the mean square error. I would say there should probably be such a connection, but I do not know whether it is a weak link or a strong one — there is something there still to be investigated.
0:21:26 So let's...