0:00:13 I was just thinking: the first conference presentation I ever gave, there was an audience member who spent the whole session hammering all the speakers with questions, and my presentation was at the end, so of course after my presentation they all hammered me together. So my plan is to speak the whole time, go five minutes over so there's no time for questions, and then leave very quickly.
0:00:42 So I guess we're going to get started here. You know, being a professor, I can basically fill whatever time is required; I have about fifty backup slides, so I can do this in an hour, I can do it in twenty minutes, or I can do it in five.
0:00:54 This is joint work with my PhD student, who is at National Instruments now; it's work he did for his PhD, actually about a year ago. Something about once students graduate, it becomes a lot harder to get them to actually submit the work on time, for some reason.
0:01:11 Anyway, it's nice to be here presenting it. The title is "Manifold Predictive Coding for Limited Feedback Multiuser MIMO Systems," and I'm glad about the previous presentation, because it gives me the nice introduction I was expecting to get in this session: I don't have to tell you what a multiuser MIMO setup is.
0:01:30 The system I'm going to consider in this talk is a multiuser MIMO communication system with limited feedback. I'm going to make the main assumption, which I guess we just found out from the previous talk isn't a good one, that the channel quality information is perfect, and I'm going to focus on quantizing the direction information.
0:01:49 So the multiuser MIMO setup is this: we're going to do zero-forcing precoding at the transmitter, each user has a single receive antenna, and the objective at each user is to quantize its channel direction vector and send it back over the limited feedback link. We put all of those together to design the transmit beamformers, and everything works just like in the previous talk. So that's the setup we're considering.
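The zero-forcing setup just described can be sketched in a few lines of NumPy. This is a minimal illustration under my own assumptions (four antennas, four single-antenna users, and unquantized directions), not the speaker's simulation code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, K = 4, 4  # transmit antennas, single-antenna users (illustrative sizes)

# Rayleigh channel rows h_k; each user would feed back its direction h_k/||h_k||
H = (rng.standard_normal((K, n_t)) + 1j * rng.standard_normal((K, n_t))) / np.sqrt(2)
dirs = H / np.linalg.norm(H, axis=1, keepdims=True)

# Zero-forcing beamformers: columns of the pseudoinverse of the stacked
# (here unquantized) directions, normalized to unit power per user
W = np.linalg.pinv(dirs)
W /= np.linalg.norm(W, axis=0, keepdims=True)

# With perfect direction feedback, user k sees no interference from beam j != k:
G = dirs @ W  # effective gain matrix; off-diagonal entries are numerically zero
```

With quantized directions, the off-diagonal entries of `G` no longer vanish, which is exactly the interference-limited behavior discussed next.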
0:02:14 The basic challenge here is as follows. If you've been following the field for a long time, you know that we started off doing this with single-user beamforming, where we were looking at codebooks on the order of three, four, five, or six bits for up to four antennas; 3GPP ended up taking four bits. So you don't need that much feedback, which is great.
0:02:36 Now the problem is, when you try to leverage this work for multiuser MIMO, you find that, generally speaking, to achieve a constant gap from the sum rate you would achieve with perfect channel state information, you need a codebook that scales with the number of users and also scales with the SNR.
0:02:58 The implication is that you tend to need high resolution, otherwise you're going to be interference limited. Exactly as in that nice plot from before, at some point at high SNR you don't want to do multiuser MIMO at all, and the reason is that you don't have enough resolution, so you become quantization limited. That's the problem. We're also seeing this right now in base station coordination, where people are applying multiuser MIMO in distributed antenna settings, and we need just as much feedback there, if not more.
0:03:31 So the main point is that as we look at more sophisticated transmission techniques, we really need higher resolution, bigger codebooks, and that becomes a problem because it means more feedback. The other issue is that in addition to quantization, we also have delay and mobility: if you have a large codebook and the channel is mobile, you have to send a lot of bits back very quickly, which means the net feedback rate on that link is very high; otherwise the transmitter won't know much about the channel, and that's a problem.
0:04:03 So we're going to propose a predictive coding framework that will allow us to take advantage of correlation in the channel, so that when the channel is not moving very quickly, we can get effectively larger codebook sizes than we otherwise would have.
0:04:18 I'll explain a little bit about the key idea and mention some prior work. The predictive coding concept: for anyone working in source coding, this would probably be the first thing you would do. The idea is as follows: if you have a source correlated over time, you find or solve for a good predictor (this might be an MMSE predictor, for example, or if you have a favorite you could plug it in), and you use that predictor to predict the next value of the source. Then, instead of quantizing the new value of the source, you quantize the error between the predicted value and the value that actually comes in.
0:04:56 This has been used with great success in speech and image coding; it's used all over the place. The main things you need are a predictor, a way to quantize the error, a way to compute the difference between the predicted and the true values, and a way to track your predicted sequence. The decoder essentially implements the same predictor: it takes the quantized error updates, keeps running the prediction, and keeps updating its state over time.
0:05:31 For example, a traditional one-step predictor: you might have two weights, you take a linear combination of the previous two symbols, and that becomes your predicted value; you can optimize those weights using MMSE. If you want to design the codebook, probably the typical way to do it is with the Lloyd algorithm; there are also structured techniques, tree-structured and so on. This is well known and widely deployed; people have been doing it for at least twenty or thirty years.
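The classical loop just described (predict, quantize the error, update encoder and decoder state in lockstep) can be sketched as scalar DPCM. The AR(1) source, predictor weight, and four-level error codebook below are illustrative choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def dpcm(source, codebook, a=0.9):
    """Minimal predictive (DPCM) loop: predict from the last reconstruction,
    quantize the prediction error, and update the shared state. The decoder
    runs the same recursion from the quantized errors alone."""
    x_hat = 0.0
    recon = []
    for x in source:
        x_pred = a * x_hat                               # predict
        err = x - x_pred                                 # prediction error
        q = codebook[np.argmin(np.abs(codebook - err))]  # quantize the error
        x_hat = x_pred + q                               # state update
        recon.append(x_hat)
    return np.array(recon)

# A correlated AR(1) source and a coarse four-level (two-bit) error codebook
src = np.zeros(200)
for t in range(1, 200):
    src[t] = 0.95 * src[t - 1] + 0.1 * rng.standard_normal()
recon = dpcm(src, codebook=np.array([-0.3, -0.1, 0.1, 0.3]))
# The reconstruction tracks the source even though only two bits are sent per step.
```

The rest of the talk is about making each step of this loop meaningful when the source lives on a manifold rather than on the real line.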
0:06:01 The problem here is as follows. In the limited feedback beamforming case, the information we want to quantize, the normalized channel vector, is phase invariant: it's actually represented by a point on the Grassmann manifold. In this case the space of beamforming vectors can be considered as points on G(N,1), the set of one-dimensional subspaces of an N-dimensional space, and you can draw them like points on a sphere; that's more of an illustration, not exactly correct, but it gives you the idea. So effectively, the problem we have with limited feedback beamforming is that we want to do predictive quantization of a source that lives on this manifold. That's the basic idea.
0:06:48 Now let's figure out what the problem is: why are we talking about this now, after we've been doing limited feedback for eight or nine years? The problem is as follows. First of all, we have a subspace source, and we need to generate the error. The problem with working on a manifold, especially the Grassmann manifold, is that simple operations like adding up two points are not necessarily well defined. For example, if I'm looking at those two lines there, what does it mean to add those two lines? Do you get another line? Or if we're adding points on a sphere, you get a point that's not on the sphere once you're done, if you add them in Euclidean space. You need to do something special just to add these things up. The net of that is that even the concept of generating an error is actually not obvious.
0:07:38 You also need to predict things on the subspace. If you really want to take this manifold structure into account, we shouldn't be predicting in Euclidean space, we should be predicting on the manifold itself, so we need a manifold predictor. But what is an MMSE manifold predictor? What does multiplying mean, what do weights mean? All these things are not very well defined. That's why it's been very hard to come up with even extremely simple examples of doing predictive coding here: these operations are hard. What that means is that in the predictive coder, generating the error is not straightforward, quantizing the error is not straightforward, predicting the sequence is not straightforward, and updating the predictor is not straightforward. For each of these blocks, you need something new.
0:08:27 There has been a lot of work on this, certainly from my group and others. Everybody wants to exploit the temporal correlation of the channel; it's the right thing to do. We know multiuser MIMO doesn't work well when the channel varies quickly, and this temporal correlation is just sitting there begging to be exploited.
0:08:45 In trying to do that, the earliest work, around 2002, has some nice papers with gradient-based updates. There are dynamic codebook approaches, where you adaptively select a subset of a codebook depending on how fast the channel is moving: if it's very slow, you end up with a very directional codebook matched to the correlation; if it's moving fast, you end up with more of a Grassmannian codebook. There are progressive or successive refinement techniques, where you zoom in on the channel estimate depending on how fast or slow it's moving; that's more like a tree kind of quantization.
0:09:19 There has been some work using Euclidean prediction: you can use a Euclidean predictor and then quantize, and depending on how you do it, sometimes it can actually work very well, but sometimes you lose the phase invariance, the very structure we were trying to exploit with the Grassmann manifold in the first place, so that can be a problem. Probably the best approach I've seen is a differential approach, which is related to something that has actually been proposed in the 3GPP standard, where essentially you look at the difference between the last vector and the next vector. That approach works reasonably well, but it doesn't really use anything stochastic. There are also some rotation-based compression techniques.
0:10:00 So the main message here is that there's definitely a lot of work on this topic, but there's not really a comprehensive framework for solving the problem I just described: doing predictive quantization on the Grassmann manifold.
0:10:12 So what I'm going to do is tell you about our approach to solving this problem, and fortunately we do have a solution. The general solution, with general manifolds and many points, is still an open problem, but I'll tell you about the solution we're proposing for the case where we have two points. What I'm going to do is build up the mathematical concepts we need to use. I've got the equations here, but I'm really going to focus on the pictures, because the point is to get the intuition of what's happening, and then I'll tell you how we use it.
0:10:45 Operating on these kinds of manifolds, you can define several constructs. One of them is this notion of a tangent vector. If I have two points x1 and x2 that live on the manifold, I can define a tangent vector e, which is a vector pointing in the direction of x2 from x1, and it's orthogonal to x1. Remember, x1 is a point on the Grassmann manifold that we can represent as a unit vector; e is a vector that's orthogonal to x1, but it's not a unit vector: it has a length, and that length depends on the chordal distance between x1 and x2.
0:11:26 You can go through some nice papers, Edelman et al. for example, that have very general descriptions of the notions of tangents and geodesics, but if you look through them you'll find that, because everything is so general, it's actually very hard to pick out what you need. One of the contributions here is that we simplified everything down for the beamforming case; it turns out the equations simplify dramatically, and there's actually a lot of intuition in them. Basically, you can decompose the tangent vector into two pieces: one is the arc length between x1 and x2, and the other is the unit tangent direction; these are functions of the inner product between x1 and x2 and of the chordal distance. So we're going to use the tangent vector to give us a notion of error.
0:12:15 The second concept we use is the geodesic. The geodesic is the curve between x1 and x2 that is the shortest path between the two points. You can come up with an equation for that curve as a function of x1 and x2, but it turns out to be more convenient to write it as a function of x1 and the tangent vector. You can write it like this: x1 times the cosine of the arc length scaled by t, plus the unit tangent times the sine, so that t = 0 gives you x1 and t = 1 gives you x2. It turns out that x1 and the unit tangent are orthogonal, which you can see by remembering that the tangent vector is orthogonal to x1, so you get this nice orthogonal decomposition. We're going to use this geodesic to get something that looks like an addition.
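These two constructs have simple closed forms on G(n,1). A sketch (the function names and the n = 4 example are mine; the formulas are the standard rank-one Grassmann log map and geodesic):

```python
import numpy as np

def log_map(x1, x2):
    """Tangent at x1 pointing toward x2 on G(n,1): arc length times unit direction.
    It is orthogonal to x1, and its length is the principal angle between the lines."""
    a = np.vdot(x1, x2)                              # inner product x1^H x2
    theta = np.arccos(np.clip(np.abs(a), 0.0, 1.0))  # arc length
    r = x2 * np.exp(-1j * np.angle(a)) - x1 * np.cos(theta)  # phase-align, remove x1 part
    nr = np.linalg.norm(r)
    return np.zeros_like(x1) if nr < 1e-12 else theta * (r / nr)

def geodesic(x1, e, t=1.0):
    """Geodesic leaving x1 with tangent e: x1*cos(||e|| t) + (e/||e||)*sin(||e|| t)."""
    theta = np.linalg.norm(e)
    return x1 if theta < 1e-12 else x1 * np.cos(theta * t) + (e / theta) * np.sin(theta * t)

rng = np.random.default_rng(1)
x1 = rng.standard_normal(4) + 1j * rng.standard_normal(4); x1 /= np.linalg.norm(x1)
x2 = rng.standard_normal(4) + 1j * rng.standard_normal(4); x2 /= np.linalg.norm(x2)
e = log_map(x1, x2)
# geodesic(x1, e, 0) is x1, and geodesic(x1, e, 1) is x2 up to an overall phase
```

The orthogonal decomposition mentioned in the talk is visible directly: the cosine term lies along x1 and the sine term along the unit tangent, which is orthogonal to x1.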
0:13:01 The third thing we need is prediction: we want to figure out where the sequence of points on the Grassmann manifold is headed, and we want some function with parameters we can optimize over. For this work we took something very simple: we use the concept of parallel transport. Parallel transport is essentially a way of taking the tangent vector at x1 and mapping it over to x2. Remember that a tangent vector has to be orthogonal to its base point, so you have to do this mapping in such a way that orthogonality is maintained at the new point. You can do that, and it turns out the result looks something like the negative of the previous tangent vector. So we're going to use the parallel transport concept to define a predicted value.
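For G(n,1) the transport also has a simple closed form. A sketch, with the tangent constructed explicitly from an orthogonal direction (the dimension and arc length are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta = 4, 0.4  # illustrative dimension and arc length

# A point x1 on G(n,1) and a tangent e = theta * u with u orthogonal to x1
x1 = rng.standard_normal(n) + 1j * rng.standard_normal(n); x1 /= np.linalg.norm(x1)
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
u -= np.vdot(x1, u) * x1           # remove the component along x1
u /= np.linalg.norm(u)
e = theta * u

x2 = x1 * np.cos(theta) + u * np.sin(theta)               # geodesic endpoint
e_pt = theta * (-x1 * np.sin(theta) + u * np.cos(theta))  # transported tangent at x2
# e_pt is orthogonal to x2 (a valid tangent there) and keeps the same length as e
```

The mix of `-x1 sin` and `u cos` is what makes the transported vector "look like the negative of the previous tangent" while staying orthogonal to the new base point.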
0:13:52 Okay, so now let me show how we use these mathematical operations to do predictive quantization. The first step is generating the error. Suppose we have a predicted sequence, with the predicted value denoted x-tilde, and a state x-hat; these are known at both the transmitter and the receiver. When the new observation arrives, we compute the tangent vector that takes us from the predicted value to the new observation. That error tangent is what we work with: we use the new value to generate an error.
0:14:38 The second step is to quantize that tangent vector. Now, the first reaction, at least mine, on looking at this problem is: great, it's another Grassmannian quantization problem. But it's not actually a Grassmannian quantization problem, because a tangent vector is not unit norm. It has structure: it's orthogonal to the base point, but otherwise it lives in the whole space orthogonal to that point, so it's a slightly different quantization problem. To quantize the tangent vector in this paper, we decompose it into two pieces: one is a scalar magnitude, kind of a quality, and the other is a direction, which is a Grassmannian-type quantization of the unit tangent. We propose a codebook for that, and that's how we quantize.
0:15:24 The third step is the state update: how we actually add the quantized error onto the predicted value. We use the geodesic for that. We take the predicted point x-tilde, attach the parallel-transported quantized error vector, take a full step along the geodesic, and that's the updated state. So that's the state update: we add the error on, and this gives the new state.
0:15:56 For the predictor, what we're going to do is take the tangent between the two previous state vectors, and then kind of keep going in the same direction: we parallel-transport that tangent, add it onto the current state x-hat, and the point we arrive at is our predicted value.
0:16:11 I should point out that in this paper we've simplified everything down, so we're taking a full step. You can probably see that taking a full step may not be the right thing to do, and in some of our other work we have actually optimized the step size; you can adapt it over time, and there are some nice results related to that which appeared at ITA. That gives our proposed predictor.
0:16:36 So that's basically the idea: I'm taking the geometric tools for working on the Grassmann manifold, and using these equations to get notions of prediction, of error, and of updating with a quantized error, and I use all of that to quantize a correlated sequence over time.
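Putting the three pieces together (error tangent, magnitude-plus-direction quantization, geodesic state update, parallel-transport predictor), here is a hedged end-to-end sketch. The AR(1) channel, the 16-level magnitude grid, the random direction codebook regenerated each step from a seed both ends would share, and the always-full-step predictor are all my illustrative choices, not the paper's optimized design:

```python
import numpy as np

def log_map(x1, x2):
    """Error tangent at x1 pointing toward x2 (arc length times unit direction)."""
    a = np.vdot(x1, x2)
    theta = np.arccos(np.clip(np.abs(a), 0.0, 1.0))
    r = x2 * np.exp(-1j * np.angle(a)) - x1 * np.cos(theta)
    nr = np.linalg.norm(r)
    return np.zeros_like(x1) if nr < 1e-12 else theta * (r / nr)

def geodesic(x, e, t=1.0):
    """Step along the geodesic leaving x with tangent e."""
    th = np.linalg.norm(e)
    return x if th < 1e-12 else x * np.cos(th * t) + (e / th) * np.sin(th * t)

def transport(x, e):
    """Parallel-transport tangent e at x to the point geodesic(x, e, 1)."""
    th = np.linalg.norm(e)
    return e if th < 1e-12 else th * (-x * np.sin(th) + (e / th) * np.cos(th))

n, T, rho = 4, 300, 0.995
rng = np.random.default_rng(3)      # channel realization
cb_rng = np.random.default_rng(7)   # codebook seed, shared by encoder and decoder
mag_grid = np.linspace(0.0, np.pi / 2, 16)  # scalar codebook for the arc length

def quantize_tangent(x_pred, e, n_dir=64):
    """Quantize magnitude (scalar grid) and direction (random unit tangents)."""
    th = np.linalg.norm(e)
    th_q = mag_grid[np.argmin(np.abs(mag_grid - th))]
    if th < 1e-12 or th_q == 0.0:
        return np.zeros_like(e)
    d = cb_rng.standard_normal((n_dir, n)) + 1j * cb_rng.standard_normal((n_dir, n))
    d -= np.outer(d @ np.conj(x_pred), x_pred)    # force orthogonality to x_pred
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    u = e / th
    best = d[np.argmax(np.abs(d @ np.conj(u)))]
    best = best * np.exp(1j * np.angle(np.vdot(best, u)))  # align its phase with u
    return th_q * best

h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
hist, errs, errs_open, x0 = [], [], [], None
for t in range(T):
    w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    h = rho * h + np.sqrt(1.0 - rho ** 2) * w     # AR(1) channel evolution
    x = h / np.linalg.norm(h)                     # true direction on G(n,1)
    if len(hist) < 2:
        hist.append(x.copy())                     # assume initial synchronization
        x0 = hist[0]
        continue
    # Predict: transport the last inter-state tangent, then take a full step
    e_prev = log_map(hist[-2], hist[-1])
    base = geodesic(hist[-2], e_prev, 1.0)        # equals hist[-1] up to phase
    x_pred = geodesic(base, transport(hist[-2], e_prev), 1.0)
    # Quantize the error tangent, then update the state along the geodesic
    e_q = quantize_tangent(x_pred, log_map(x_pred, x))
    x_hat = geodesic(x_pred, e_q, 1.0)
    hist = [hist[-1], x_hat]
    errs.append(np.sqrt(max(0.0, 1.0 - np.abs(np.vdot(x_hat, x)) ** 2)))    # chordal error
    errs_open.append(np.sqrt(max(0.0, 1.0 - np.abs(np.vdot(x0, x)) ** 2)))  # no feedback
```

Tracking the correlated sequence keeps the chordal error far below what a frozen (no-feedback) estimate suffers, which is the qualitative behavior the talk's first plot shows.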
0:16:55 For the simulations in this paper we use an autoregressive model, which is rather standard, though you could do better if you used a Clarke/Gans-type model, which is band-limited and has better prediction properties; so this is actually more of a worst case. We're going to consider the sum rate: we use this for multiuser MIMO, and we compare with random vector quantization, which is essentially a good way of building fixed codebooks at different sizes.
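The random vector quantization baseline is easy to sketch (the sizes are illustrative; a real comparison would average the distortion over many channel draws):

```python
import numpy as np

rng = np.random.default_rng(4)
n, bits = 4, 9  # four antennas and a nine-bit codebook, as in the plots

# RVQ codebook: 2^bits i.i.d. isotropic unit vectors, known to both ends
cb = rng.standard_normal((2 ** bits, n)) + 1j * rng.standard_normal((2 ** bits, n))
cb /= np.linalg.norm(cb, axis=1, keepdims=True)

def rvq(x, cb):
    """Pick the codeword closest in chordal distance (largest |c^H x|)."""
    return cb[np.argmax(np.abs(cb @ np.conj(x)))]

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)
xq = rvq(x, cb)
dist = np.sqrt(max(0.0, 1.0 - np.abs(np.vdot(xq, x)) ** 2))  # chordal quantization error
```

Because isotropic random codewords are near-optimally spread on the Grassmann manifold, this serves as the strong fixed-codebook benchmark mentioned in the sum-rate comparison below.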
0:17:22 The first result: I just want to point out that using this algorithm does indeed result in effectively higher resolution. I forget the exact parameters of this simulation, but this is basically a nine-bit codebook; the sequence varies over time, and we keep quantizing it. The chordal distance fluctuates, because sometimes your quantized value is close to the true channel and sometimes it's far. This is our approach down here, also with nine bits, and we're getting about a factor of five improvement. Because we're tracking while things are changing over time, we're not converging to zero, but this is just to show you that it does actually decrease the error in terms of average chordal distance.
0:18:10 Now let's look at the sum rate comparison. The blue curve here is perfect CSI with four users; four antennas, so we don't have any more users than that. We didn't select the best four users; we just picked four users randomly, and they have the same average SNR, so they're effectively sitting on the same circle, if you like. So the blue curve is what we're trying to achieve: perfect CSI.
0:18:37 This curve, kind of a mustard-looking color, is a Grassmannian codebook. One slightly odd thing is that there aren't really good Grassmannian codebooks for large codebook sizes, but with random vector quantization you get more or less a Grassmannian codebook; essentially a very good codebook whose geometry works out. So this actually is a good fixed codebook; you probably won't do much better than that. And here is the standard result: at high SNR you flatten out due to interference.
0:19:08 Then you can see these different Doppler-times-symbol-period products: 0.01 is a very slow channel, and this one is a reasonably fast channel.
0:19:19 What you can see is that with nine bits of feedback, in an effectively slow channel, we're able to get very good performance, tracking that ideal sum rate. With a somewhat faster channel we still get a little bit of improvement, but the performance gain is not as large. The nice thing is that every user can essentially be updating independently. In this plot all the users have the same channel profile, but the algorithm is flexible enough that every user essentially runs its own adaptive predictive algorithm: the transmitter gets the CSI, and if a user's channel happens to be slow, it has better information about that channel; if the channel is fast, the information gets worse.
0:19:59 So it actually tracks quite well, and this also includes a five-millisecond feedback delay, which is a standard assumption in 3GPP. Even with delay that we're not explicitly accounting for, we can still track the sum rate under some reasonable assumptions.
0:20:18 So that's essentially what I've done in this talk: proposed a manifold predictive coding framework. At least from a source coding perspective, the picture is nice and the story is nice, but there are a lot of limitations. First, we're only using the previous two points; I'd like to use more than that. Clearly, if I had figured out how to do this with three or five previous points, I would have just shown you that. It's hard, because these tangent concepts are based on two points, so if you have three, you have to do something else. We have some ideas, but it's proving to be very difficult.
0:20:55 My student, in his dissertation, has some extensions of this to Stiefel manifolds, so you can do this there too. We did some work with limited-feedback MIMO-OFDM, where you have to do a kind of interpolated Grassmann quantization, and this whole concept works there as well, but the feedback requirements become really high, so I'm not sure this is exactly the right solution for higher dimensions yet, but I think it's a good idea.
0:21:19 And then I showed you the multiuser MIMO results. Since we developed this, another student who graduated has enhanced it for multi-cell MIMO, where we actually predict the channels from interfering base stations using a similar concept; she was actually able to come up with, frankly, a much better predictor, and that result is at ICC. And I have another result where we've also come up with a better predictor for interference alignment, which is going to appear soon. So I think the concept is good, and I still think there's a lot of work to be done here.
0:21:54 Okay, so that's it; I apologize for not taking enough time. [inaudible]
0:22:02 Oh, I took exactly the right amount of time.
0:22:18 [Audience question, partly inaudible: how much does the method depend on the assumed channel time-evolution model?]
0:22:29 It's not really very dependent on it. In the journal version, which we actually just submitted recently and should be on arXiv today or tomorrow, we have simulations with other temporal models, like multi-tap AR models, and it still works well. So we're not really exploiting any knowledge of the channel correlation. You could probably come up with a correlated channel model that makes the AR assumption more appropriate, but we're not using it; we don't have the correlation function anyway.
0:23:04 [Follow-up: so you're not exploiting that structure?]
0:23:10 Right, right. It really only comes in when we quantize the magnitude of the tangent vector. If the channel is varying slowly, the magnitude of the tangent vector will vary slowly and we'll take a small step; when it's varying quickly, we'll take a larger step. That's the only place where the correlation comes in. Now, in the other paper I mentioned, where we adapt the step size, the statistics do come in, but not in a fundamental way.
0:23:57 [Audience question, partly inaudible: do you have to stick to the two separate terms and quantize only the normalized direction, or could you quantize the unnormalized channel estimate directly?]
0:24:37 So the question is whether you need only the direction. Remember that this vector lives on the Grassmann manifold, so it's unit length and it's also phase invariant. Those two properties cut down the degrees of freedom of the vector: a complex vector with N entries has 2N real parameters; with the norm removed you have 2N minus 1, and with the phase invariance you have 2N minus 2. The whole reason we work on the Grassmann manifold is so that we quantize less information, and you can show, from the point of view of capacity or probability of error, that all you need is this normalized direction information.
0:25:13 Now, if you want to quantize the magnitude as well, together with the CQI, then you're actually back to 2N minus 1, and if you want to combine those together, you could; you would end up with something different, but you would still have the phase invariance, so it wouldn't be like a random Gaussian vector; it would live on a different manifold.
0:25:38 [Follow-up, partly inaudible: right, but that doesn't really matter in this context; you just want to sum something at each step, and it doesn't matter whether the result is unit norm, and you don't care about the phase either.]
0:26:02 Okay, so I can give you another answer to this question. One thing you could do is this: the Grassmann manifold is a Riemannian manifold, and it's locally Euclidean, so in fact you could use a kind of standard Euclidean predictor, add the vectors up to get a point that's not on the manifold, and then, if you need a unit norm, project back to the manifold. And indeed that works if you're operating in a local enough region.
0:26:30 What we found, and the analysis in the journal version of this paper actually makes essentially a small-angle assumption here, is that if you have more variation, the problem is that the addition-and-projection operation drifts away from this geodesic kind of addition. So I think there actually is a case for doing that too; it just depends. The approach we've taken in the ICC paper actually has a better predictor that doesn't predict on the Grassmann manifold, but we still do the updates on the Grassmann manifold. So your point is well taken.
0:27:13 [Coffee break]