Alright, welcome everybody, and thanks for coming to my talk. As already mentioned, it is about a first-order performance analysis for structured-least-squares-based ESPRIT, and it is joint work with my co-authors. Now for a brief motivation: I probably don't have to tell you that high-resolution parameter estimation is interesting for a number of applications. In particular, ESPRIT-type parameter estimation schemes are often used because they are very simple and very flexible; in many settings they are entirely closed-form, you don't need any peak search or anything like that, and they still perform close to the Cramér-Rao bound. These ESPRIT-type algorithms are all based on the shift invariance equations, a set of overdetermined equations which are typically solved using least squares. It is known that this is actually suboptimal: it is suboptimal because it ignores the fact that there are also estimation errors in the subspace itself. There is a better technique, called structured least squares (SLS), which takes these estimation errors into account and also explicitly exploits the structure of the shift invariance equations; it was proposed in '97 and it offers improved performance. The problem, however, is that so far SLS-based ESPRIT was only evaluated using simulations, so there is no analytical statement about when it performs better, or by how much. An analytical performance evaluation is therefore clearly desirable. The goal of this paper was to apply the same framework we previously used to analyze least-squares-based ESPRIT — a framework that provides a first-order expansion of the signal subspace — to analyze structured-least-squares-based ESPRIT. With this framework we had already analyzed various versions: standard ESPRIT, Unitary ESPRIT, and more.
You can even do it for Non-Circular ESPRIT and others, but all of those analyses were based on least squares; the purpose here was to extend the framework to incorporate structured least squares.

Which brings me to the outline of the talk. I will first review the shift invariance equations and what least squares and structured least squares actually mean, then show you the concept of the first-order perturbation expansion for the SVD, which you might find interesting to use in other fields as well, and then our earlier results for least-squares-based ESPRIT. The main part will focus on structured least squares and its analysis, and then I will show some simulation results.

So let's start with the review: what is shift invariance? ESPRIT is based on the fact that you can divide the array into two identical subarrays which see the same observations except for a phase shift. This phase shift is encoded in μ, the spatial frequency, and the spatial frequency is linked to your parameter of interest: for instance, if you want to do direction-of-arrival estimation, μ and the direction of arrival are connected by a simple relation. Now, you can define selection matrices J1 and J2 which operate on your array steering vector a(μ) to select the first and the second subarray; shifting one onto the other, you get the same observation except for the phase factor which contains the parameter of interest. That is the single-source case. For multiple sources, J1 and J2 operate on the array steering matrix A, which contains all the steering vectors, and the parameters of interest sit in Φ, the diagonal matrix of phase factors containing the spatial frequencies. This gives the shift invariance equation J1 A Φ = J2 A, a matrix equation. The problem is that the array steering matrix A is, of course, unknown. To get rid of the unknown steering matrix, we take our observations — say X, a matrix which contains N snapshots from our M sensors — and we just compute its SVD.
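The shift invariance relation just described is easy to verify numerically. Here is a minimal sketch, assuming a uniform linear array with maximum-overlap subarrays; the sizes and spatial frequencies are made up for illustration:

```python
import numpy as np

M, d = 6, 2                          # sensors, sources (illustrative values)
mu = np.array([0.5, -1.2])           # spatial frequencies (radians)

# Array steering matrix of a ULA: A[m, k] = exp(j * m * mu_k)
A = np.exp(1j * np.outer(np.arange(M), mu))

# Selection matrices picking the two maximum-overlap subarrays
J1 = np.eye(M)[:-1, :]               # sensors 0 .. M-2
J2 = np.eye(M)[1:, :]                # sensors 1 .. M-1

# Diagonal matrix of phase factors between the subarrays
Phi = np.diag(np.exp(1j * mu))

# Shift invariance equation: J1 A Phi = J2 A holds exactly without noise
print(np.allclose(J1 @ A @ Phi, J2 @ A))   # True
```

Advancing every sensor index by one multiplies each steering column by its phase factor, which is exactly what the equation expresses.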
From the SVD we get an estimate of the signal subspace, namely the d dominant left singular vectors. Then we use the fact that the array steering matrix A and this matrix U_s span the same column space — approximately, because there is noise — so they are related via a transform matrix T, and this can be used to eliminate A. We arrive at the shift invariance equations that actually need to be solved: J1 U_s Ψ = J2 U_s. The only unknown here is Ψ, we have an estimate for U_s, and the eigenvalues of Ψ carry the spatial frequencies — exactly so if the subspace estimate were correct. This is the basis for ESPRIT; that was just a very quick review.

The main point is that these shift invariance equations are overdetermined, and the estimate we have for the subspace is not exact. So how do we solve them? Typically, people solve them using least squares, which simply means minimizing the least squares fit between the left-hand and the right-hand side of the equation. This gives a very simple closed-form solution via the pseudoinverse. The problem is that we ignore the fact that we don't know U_s exactly: we implicitly assume it is perfectly known, which is not true — we know there is an error in there. That is the idea of structured least squares. SLS changes the cost function: for each occurrence of the subspace estimate U_s it adds an additional term ΔU_s, which explicitly models the fact that we have an estimation error in the subspace and which we try to correct, so that the two sides of the equation align in a better way. In addition, a regularization term makes sure this update stays small, penalizing large corrections. This is the cost function for structured least squares. The nice thing is that it takes the errors in the subspace into account; the drawback is that it is no longer linear in the unknowns but quadratic, so it is typically solved iteratively via a local linearization.
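For reference, the plain least-squares pipeline described above — SVD, signal subspace, least squares solution of the shift invariance equation, eigenvalues of Ψ — can be sketched as follows. This is a toy simulation with arbitrary array size, frequencies, and noise level, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, N, sigma = 8, 2, 100, 0.01          # sensors, sources, snapshots, noise std
mu_true = np.array([0.7, -0.4])           # spatial frequencies to recover

A = np.exp(1j * np.outer(np.arange(M), mu_true))          # ULA steering matrix
S = (rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))) / np.sqrt(2)
W = sigma * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
X = A @ S + W                                             # noisy observations

# Signal subspace estimate: d dominant left singular vectors of X
Us = np.linalg.svd(X)[0][:, :d]

# Least squares solution of the overdetermined system  J1 Us Psi = J2 Us
Psi = np.linalg.lstsq(Us[:-1, :], Us[1:, :], rcond=None)[0]

# The eigenvalues of Psi carry the phase factors exp(j mu)
mu_est = np.sort(np.angle(np.linalg.eigvals(Psi)))
print(mu_est)                             # close to sorted mu_true at this SNR
```

Note that `Us` enters on both sides of the solve; least squares treats it as exact, which is precisely the assumption SLS removes.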
It has been shown, though, that only very few iterations are required; in fact, in the high SNR regime a single iteration is enough, so you can see it simply as a correction step.

Alright, now to the performance analysis. For this we need to look at the source of error, and the source of error here is exactly this ΔU_s, the error in the signal subspace, which we need to grasp analytically. The framework we use for this is an established one from the literature, which I will briefly review; it is also very simple. You take your noise-free observations X_0, for which you have the true signal subspace and the true noise subspace. If you perform the SVD in the presence of noise, you only have an estimate, so you can say that your estimated signal subspace is the true one plus an error, and we are trying to find an expansion for this error. You can always decompose the error into one part which lies in the noise subspace and one part which lies in the signal subspace; this is simply because together they span the whole space, so you can always break the error into these two parts. The interpretation: the first component, the part of the signal-subspace error that lies in the noise subspace, models how much of the noise leaks into the signal subspace — how the subspace itself is rotated — whereas the second one, the part of the error inside the signal subspace, models how the individual singular vectors inside the signal subspace, that is, the particular basis we choose, are perturbed. Obviously the second component plays no role for ESPRIT, because the particular basis is irrelevant; only the first one matters. Extensions exist, so we only use the first term because the second one is irrelevant here, but if you want, you can easily incorporate the second one as well. For the first term there is a known first-order expansion in the noise; a very similar expression for the second term was actually proposed by a colleague of mine — though, as I said, we don't need it for this work.
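This decomposition can be checked numerically. The sketch below (random low-rank data, illustrative sizes, my own construction) verifies that, to first order, the component of the estimated signal subspace that lies in the noise subspace equals the noise perturbation mapped through V_s Σ_s^{-1}:

```python
import numpy as np

rng = np.random.default_rng(1)
M, d, N, eps = 6, 2, 20, 1e-4        # eps small, so the first-order term dominates

# Noise-free rank-d data X0 with true signal/noise subspaces Us, Un
A = rng.standard_normal((M, d)) + 1j * rng.standard_normal((M, d))
S = rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))
X0 = A @ S
U, s, Vh = np.linalg.svd(X0)
Us, Un = U[:, :d], U[:, d:]
Vs, Sig = Vh[:d, :].conj().T, np.diag(s[:d])

W = eps * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
Uhat = np.linalg.svd(X0 + W)[0][:, :d]   # estimated signal subspace

# First-order prediction of the noise-subspace component of Uhat:
#   Un^H Uhat  ~  (Un^H W Vs Sig^{-1}) (Us^H Uhat)
D_actual = Un.conj().T @ Uhat
D_theory = (Un.conj().T @ W @ Vs @ np.linalg.inv(Sig)) @ (Us.conj().T @ Uhat)
print(np.max(np.abs(D_actual - D_theory)))  # residual is second order in eps
```

The factor `Us.conj().T @ Uhat` accounts for the arbitrary basis rotation inside the signal subspace — exactly the second error component, which is irrelevant for ESPRIT.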
Already in '93, this result was used to analyze standard ESPRIT: a first-order expansion was given for the estimation error in the spatial frequency of standard ESPRIT, a simple expression. Based on this work, we extended it: we showed last year at ICASSP in Dallas that you can, for instance, take the statistical expectation of this expression assuming white complex Gaussian noise. I should probably emphasize that this framework is asymptotic in the effective SNR, so you do not need a large number of snapshots or anything like that — you can have a single snapshot if you want, as long as the noise variance is low — and it needs no particular assumptions about the statistics: you don't need Gaussianity of the noise, you don't even need Gaussianity of the symbols, you just need the perturbation to be small. But if you do assume Gaussianity, then you can of course compute the expectation and you get the mean square error. This we had for least-squares-based standard ESPRIT and Unitary ESPRIT, and more. These were our previous results, and based on them we now try to extend the analysis to incorporate structure.

What is done here is the following: we first restrict our attention to a special case of structured least squares, which uses a single iteration and no regularization. The reason is that these assumptions are asymptotically optimal for structured least squares in the high SNR regime, and since the performance analysis we do is asymptotic in the high effective SNR anyway, it is fine to assume this — asymptotically you are exact. Under these assumptions, you can express the cost function and the solution in a very simple way: the SLS solution Ψ_SLS is equal to the initial least squares solution plus an update term, and the update term is the solution of this cost function, which is of course quadratic — as I said, there is
a constant term which does not depend on the unknowns, a linear term, and a quadratic term. The quadratic term we neglect — that is the linearization — so we are back to a linear least squares problem, and this problem has, of course, a very simple closed-form solution via the pseudoinverse. This is the update for Ψ, and this would be the update for the subspace if you wanted to run a second iteration; we don't actually need it, since we only do one. The main message is that the update can be explicitly computed by taking the vector r_LS, the vectorized version of the residual matrix after least squares, and multiplying it by the pseudoinverse of this matrix F, which is a linear mapping. For these two quantities we have to find a first-order expansion, and we did it by looking at both terms individually.

We start with the first term, the matrix F. What we have shown is that you can express the estimated matrix F̂ as a constant matrix plus a perturbation matrix ΔF, where the first matrix is constant in the sense that it does not depend on the perturbation itself. So if you look at one realization of the random perturbation, the first summand is constant and the second is linear in the perturbation, and together they give F̂. Therefore, if you look at its pseudoinverse — since the linear part is zero mean — it is not very hard to see that the pseudoinverse is equal to the pseudoinverse of the constant matrix, independent of the perturbation, plus a linear term, plus a quadratic term, plus higher orders. We do not actually need to expand these terms explicitly; it is fine to know that only the constant term matters, because, as we will see, the linear term is not needed in the end, and this simplifies things greatly. For the second term, the vector r_LS, it is not difficult to see that it can be written as a linear expansion in the subspace error ΔU_s, via a matrix mapping the subspace error to the residual, plus a quadratic term which we again ignore.
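The single-iteration, regularization-free SLS update can be made concrete in code. This is my own minimal sketch, not the authors' implementation: the residual of the least squares solution is vectorized, the linearized system in (ΔΨ, ΔU_s) is built from Kronecker products, and the minimum-norm least squares solution of that system supplies the update:

```python
import numpy as np

def sls_step(Us, Psi):
    """One linearized SLS correction (single iteration, no regularization)."""
    M, d = Us.shape
    J1, J2 = np.eye(M)[:-1, :], np.eye(M)[1:, :]
    r = (J1 @ Us @ Psi - J2 @ Us).ravel(order='F')        # vec of the LS residual
    # Linearized residual: r + (I kron J1 Us) vec(dPsi)
    #                        + (Psi^T kron J1 - I kron J2) vec(dUs)
    F = np.hstack([np.kron(np.eye(d), J1 @ Us),
                   np.kron(Psi.T, J1) - np.kron(np.eye(d), J2)])
    z = -np.linalg.lstsq(F, r, rcond=None)[0]             # minimum-norm LS update
    return Psi + z[:d * d].reshape(d, d, order='F')

# Illustrative usage on simulated ULA data (sizes and noise level made up):
rng = np.random.default_rng(2)
M, d, N, sigma = 8, 2, 50, 0.05
mu_true = np.array([0.7, -0.4])
A = np.exp(1j * np.outer(np.arange(M), mu_true))
S = (rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))) / np.sqrt(2)
W = sigma * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
Us = np.linalg.svd(A @ S + W)[0][:, :d]
Psi_ls = np.linalg.lstsq(Us[:-1, :], Us[1:, :], rcond=None)[0]
Psi_sls = sls_step(Us, Psi_ls)
mu_sls = np.sort(np.angle(np.linalg.eigvals(Psi_sls)))
```

Applying `sls_step` once on top of the least squares solution corresponds to the single-iteration special case analyzed in the talk; the minimum-norm solution plays the role of the regularization-free limit.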
The subspace error itself, by the expansion reviewed earlier, is in turn linear in the actual perturbation, the noise: a matrix times the noise, plus higher-order terms. Now we collect both: putting these together, we see that the vector r_LS has a linear expansion in the noise. Combining the two results back into the original expression and multiplying out, we get a linear term — the constant term times the linear term — plus a quadratic term, the linear times the linear, which we again neglect since this is a first-order analysis. So we do not even need the linear expansion of the pseudoinverse. We then obtain this very simple result, which is pretty intuitive: starting from the noise, you have one mapping matrix from the noise to the subspace error, one from the subspace error to the residual, and then this matrix here mapping the residual to the update. We are only interested in the upper part, the Ψ-update, so the final result for this upper part is this one — again a pretty intuitive concatenation of mappings. You can then plug this back into the original expansion of the estimation error, and you find a first-order expansion of the estimation error of the spatial frequencies. Again it is very simple, and it has exactly the same structural form as the least squares expansion we showed previously: where we had the vector r_LS, we now have r_SLS — slightly different, but the form is the same. And again, if you want, you can compute the mean square error: if you assume zero-mean circularly symmetric white Gaussian noise, you can evaluate the expectation and you get the MSE, again a very compact and simple expression.

Now, what is all this good for? Why do we go through this analytical derivation, and what does it show us now that
we have the result? The great thing is that if you look at a specific case, you can simplify the expressions so much that you actually gain insight into the performance of the schemes in that very specific setting. To demonstrate this, I brought one example which is not in the paper, due to lack of space, but I still think it is interesting because it shows that this result has applications. The example is the simplest one you can think of: a single source. If you consider a single source, we have shown that for least-squares-based ESPRIT the mean square error has a very compact expression: for a uniform linear array of M sensors, it is one over the effective SNR in front, times a term that is basically quadratic in 1/M. The Cramér-Rao bound also has a very simple expression, which means you can find the asymptotic efficiency — again asymptotic in the effective SNR, so it is valid even for a single snapshot. It is given by this expression: a closed-form expression for the asymptotic efficiency, depending only on M, and it is exact. What you can see is that it starts at one for two and three sensors and then decays, so least-squares-based ESPRIT is not efficient for large arrays, even for a single source. We did the same derivation for structured least squares, and after a number of manipulations we found again a closed-form expression for the mean square error. It is a bit more involved, but you can do the same thing: divide the Cramér-Rao bound by the mean square error and you find a closed-form expression for the asymptotic efficiency, which is a ratio of polynomials. Interestingly, the first three coefficients agree; after that they start to differ. If you plot it on the same axes it looks almost equal to one, but it is actually not: if you zoom in a little, you can see it starts at one, goes down a little bit, and then comes back up.
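The qualitative single-source behavior is easy to probe with a small Monte Carlo experiment. The sketch below is my own construction, not the paper's derivation: it uses the standard single-source deterministic Cramér-Rao bound 6σ²/(N·P·M(M²−1)) (here with N = 1 snapshot and source power P = 1) as the reference, so treat the exact numbers as illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def ls_esprit_mse(M, sigma, mu=0.3, trials=2000):
    """Empirical single-snapshot MSE of single-source LS-ESPRIT on an M-sensor ULA."""
    a = np.exp(1j * np.arange(M) * mu)
    err2 = 0.0
    for _ in range(trials):
        x = a + sigma * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
        u = x / np.linalg.norm(x)        # dominant left singular vector (up to phase)
        psi = np.vdot(u[:-1], u[1:]) / np.vdot(u[:-1], u[:-1])  # scalar LS solution
        err2 += (np.angle(psi) - mu) ** 2
    return err2 / trials

sigma, eff = 1e-3, {}
for M in (3, 6, 10):
    crb = 6 * sigma**2 / (M * (M**2 - 1))    # single source, P = 1, N = 1
    eff[M] = crb / ls_esprit_mse(M, sigma)
print(eff)    # efficiency near 1 for M = 3, decaying as M grows
```

A quick first-order calculation for this scalar estimator suggests the efficiency behaves like 6(M−1)/(M(M+1)), which equals 1 for M = 2 and 3 and decays with M — consistent with the trend described in the talk. That closed form is my own back-of-the-envelope result, not the paper's expression.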
We don't really have a physical explanation for this dip; mathematically you can show that it is there, and with simulations you can verify it — you really get these exact values for the asymptotic efficiency. So I think this is a pretty valuable result, and it would be interesting to extend it to two sources, to see the performance in terms of separation, correlation, and these kinds of parameters.

Alright, just a few more simulations. In the simulations we compare three things: the empirical error, which you get by actually running ESPRIT on random data, computing the estimated spatial frequencies, computing the error, and averaging; the semi-analytical result, which still depends on the noise realizations, so we average it over the noise; and the fully analytical result, for which no simulations are needed at all. This first example is for uncorrelated sources. Here the performance of Unitary ESPRIT based on least squares and based on structured least squares is very close, so it is of course interesting to see whether this really small gap can still be reliably predicted — and the answer is yes: with our analytical results we become asymptotically accurate, and the same holds for the semi-analytical ones. The next result is for sources which are very strongly correlated: 0.99 correlation between any pair of sources. Now we have four curves — standard ESPRIT based on least squares and on structured least squares, and Unitary ESPRIT based on least squares and on structured least squares — and due to this strong correlation there is a bigger gap, so you can distinguish the curves more clearly. Again, the results become accurate for high SNR. And then the single-source case: for a single source we see the improvement, if you plot the error versus the SNR for eight sensors, between least squares and structured least squares,
and again the analytical results agree. That brings me to the conclusions. What we presented is a first-order perturbation analysis for structured-least-squares-based ESPRIT, based on the first-order perturbation analysis of the SVD, which is a very nice concept that can also be used in other fields. The nice things about it: it is asymptotic in the effective SNR — so a small noise variance or a large number of snapshots, it can be both, whatever you want — and it is explicit, which means you don't need any assumptions about the statistics. You need the noise to be zero mean, but you don't need the sources to be Gaussian, you don't need the noise to be Gaussian, you just need the perturbation to be small. We have also shown mean square error expressions assuming zero-mean circularly symmetric white Gaussian noise, and we have shown explicit expressions for the single-source case, where you can actually gain insight. This concludes my talk; I think we have time for questions.

[Audience question about the relation to total least squares.] Yes, there is a relation, but there is also a difference. In total least squares you also allow for an error in the subspace estimate, but you assume that these are independent errors on the left-hand and the right-hand side of the equation. That is exactly why our scheme is called structured least squares: total least squares would model two independent errors for the left and the right-hand side, but actually there is a structure in the shift invariance equations which tells you that these errors are not independent — they are almost the same, up to the selection matrices — and this structure should be incorporated into the solution.

[Audience question about whether the subspace basis must be unitary.] It doesn't have to be; we don't constrain the corrected subspace to be unitary. For ESPRIT you don't need the constraint that the basis is orthonormal: you can describe the subspace using any basis, they are all equivalent. Typically you use the SVD because it gives you an orthonormal basis, which is nice to work with and simple, but you could use any
other basis, and it would perform the same — any basis is fine. Actually, when we started deriving these results, in the first version we had defined the subspace in a way that was not unitary, and when we corrected that, it had no impact whatsoever on the performance. So the basis really does not have to be unitary. [Follow-up about constraining the corrected subspace to remain unitary.] You could do that, but what would be the gain? We don't need the unitarity, neither for deriving ESPRIT nor for using it — but it would be possible.

[Audience question about what happens as the number of snapshots goes to infinity.] Let me go back — what we actually need is that this term here vanishes. It contains P, the power of the sources, the noise variance σ², and N, the number of snapshots. So if you have a finite SNR and you let N go to infinity, it works exactly the same way as letting the noise variance go to zero or the source power go to infinity.

[Audience question about whether the efficiency result holds only for a single source.] Yes, this is for a single source only; for multiple sources it is not as clear. It was a surprise for us when we saw it for the first time, but you can verify it with simulations. It is surprising because the low-resolution techniques are asymptotically efficient for a single source, whereas least-squares ESPRIT is not. For more sources it is very hard to find such expressions explicitly, because the number of terms gets very large; we tried to simplify them and did not get a final result, but from simulations my experience is that a gap of this kind remains. It also disappears, of course, if you replace least squares by structured least squares, which makes up for most of the gap — and it is just one
correction term; it is not an iterative procedure. You apply a single correction and the gap basically disappears — it is something very simple. [Audience question, translated, about whether structured least squares yields a better subspace estimate than least squares when the sources are correlated.] The problem is that there is not always a one-to-one mapping between a better subspace and a better performance of ESPRIT. For a single source, for instance, we get a better subspace, but sometimes you get a better subspace and it does not help you at all in terms of the mean square error. I would say there should probably be a connection, but I don't know whether it is a weak link or a strong one — that is something we still have to look at. Thank you.