0:00:13 So, my name is — well, I will not take too much of your time. I will be presenting our work on the Fourier expansion for nonlinear system identification. This work was done at our home institution together with my colleague.
0:00:35 So, going on to the outline: after presenting the motivation and talking about the two types of basis functions — because we have more than one type of basis function — I will talk about the basis-generic signal modeling in the DFT domain, which, as you will see, can be cast into an equivalent multichannel form; that brings multichannel adaptive filtering into the game. In the results section you will then see how the basis selection affects the multichannel system identification, and then come the conclusions.
0:01:18 So, it is well known that the virtue of the Hammerstein structure is that it lets us model a memoryless nonlinearity followed by a linear FIR system. If we want to do adaptive nonlinear system identification, for example for echo cancellation, then we need to cater for the memoryless nonlinearity, which we can model by means of a power series — the somewhat more traditional and widely used expansion basis — or, as we propose and investigate here, the Fourier series, which is a mutually orthogonal expansion basis. We investigate what effect this choice has on the ensuing equivalent multichannel system identification, with respect to the rate of convergence and also the quality of the learning of the underlying nonlinearity.
0:02:14 So, just to bring everybody onto the same page with respect to notation: this is the Hammerstein structure. This is the input signal, the excitation signal; the nonlinear mapping applies a memoryless nonlinear transformation to it to give the nonlinearly mapped signal; that gets convolved with the linear system to give the auxiliary signal d, which gets superposed by the observation noise, finally giving the observation. So the modeling that I have been talking about concerns this nonlinearity here.
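A minimal sketch of this signal path, assuming a clipping nonlinearity, a random FIR echo path, and illustrative signal lengths (none of these specifics are fixed by the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_nl(x, x_max=0.1):
    """Memoryless clipping nonlinearity f(.): linear up to +/- x_max (example choice)."""
    return np.clip(x, -x_max, x_max)

N = 16000
x = rng.uniform(-1.0, 1.0, N)            # input / excitation signal, normalized to [-1, 1]
w = 0.05 * rng.standard_normal(192)      # linear FIR system (hypothetical echo path)

z = clip_nl(x)                           # nonlinearly mapped signal
d = np.convolve(z, w)[:N]                # auxiliary signal: FIR-filtered nonlinear signal
y = d + 1e-3 * rng.standard_normal(N)    # observation = auxiliary signal + observation noise
```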
0:02:49 So, now for the basis function types. We can approximate the nonlinearity in the Hammerstein model by such a summation, where phi_i is basically the i-th order basis function of the corresponding basis, a_i is the corresponding expansion coefficient, and this here is the expansion order. If phi_i were to belong to some sort of polynomial basis, then it would simply be the i-th power of x, and a_i would be the corresponding polynomial coefficient. Correspondingly, for the Fourier basis, phi_i would be a sinusoid — this is the Fourier series — a_i would be the Fourier coefficient, and 2L would be the fundamental period. The selection of this fundamental period is somewhat critical, but to keep it short, we can do it as follows.
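In code, the two basis types could look like the following sketch; the sine-only Fourier basis anticipates the odd-symmetry assumption made in a moment, and the indexing convention is my own:

```python
import numpy as np

def poly_basis(x, i):
    """i-th power-series basis function: phi_i(x) = x**i."""
    return x ** i

def fourier_basis(x, i, L=1.5):
    """i-th Fourier basis function (sine terms only, for odd nonlinearities),
    with fundamental period 2L."""
    return np.sin(i * np.pi * x / L)

def expand(x, coeffs, basis, **kwargs):
    """Approximate f(x) as sum_i a_i * phi_i(x)."""
    return sum(a * basis(x, i, **kwargs) for i, a in enumerate(coeffs, start=1))
```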
0:03:43 Assume that we have some given nonlinear mapping f(x) and a data range normalized to minus one to plus one. If we were to compute the coefficients for the power series, we could minimize such an error expression over the data range in the least-squares sense; there can be any number of strategies — I am not focusing on that — you could, for example, use a standard polynomial fitting routine for it. For the Fourier series coefficients, because we know that the nonlinearity will be somewhat of an odd function, we can use the closed-form expression for the computation, and again L, half of the fundamental period, comes into play. The importance of the selection of L is this: this is the data range, and we know that the data lies within minus one to plus one, so we should select L to be a bit greater than one — because if L were exactly one, then all the sinusoids would go to zero near the plus/minus one range, and you would not be able to model the data at the edges.
0:04:39 As an example manifestation of the nonlinearity I take a clipping function: it has a linear range and is then clipped at x_max, which is the threshold. We have of course experimented with other forms of clipping functions and other nonlinear functions, but this one serves as a very good example for the discussion.
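A rough sketch of both coefficient computations for such a clipping function: a least-squares fit for the power series over the normalized data range, and the standard closed-form sine coefficients of an odd function with period 2L for the Fourier series, evaluated here by numerical integration. The order, the grids, and the restriction to odd powers are assumptions on my part.

```python
import numpy as np

L, x_max, P = 1.5, 0.1, 5                     # half period, clipping threshold, order

x = np.linspace(-1.0, 1.0, 2001)              # normalized data range
f = np.clip(x, -x_max, x_max)                 # target clipping nonlinearity

# Power series: least-squares fit of the odd powers x, x^3, x^5, ...
A = np.stack([x ** (2 * i + 1) for i in range(P)], axis=1)
a_poly, *_ = np.linalg.lstsq(A, f, rcond=None)

# Fourier series: b_i = (2/L) * int_0^L f(x) sin(i*pi*x/L) dx for an odd f on [-L, L]
xg = np.linspace(0.0, L, 4001)
dx = xg[1] - xg[0]
fg = np.clip(xg, -x_max, x_max)
b_four = np.array([(2.0 / L) * np.sum(fg * np.sin(i * np.pi * xg / L)) * dx
                   for i in range(1, P + 1)])

f_poly = A @ a_poly
f_four = sum(b * np.sin(i * np.pi * x / L) for i, b in enumerate(b_four, start=1))
for name, fit in [("power series", f_poly), ("Fourier series", f_four)]:
    err_db = 10.0 * np.log10(np.mean((f - fit) ** 2) / np.mean(f ** 2))
    print(f"{name}: relative fitting error {err_db:.1f} dB")
```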
0:04:59 So the first result is about the fitting ability of the two series. I selected the fundamental period parameter L as 1.5, and you can see that the clipping threshold of the simulated clipping function is 0.1, so we have minus 0.1 and plus 0.1 here, for the chosen expansion order. The solid line depicts how the Fourier basis models this nonlinearity, and we also see how the polynomial basis models it. Even before going to any system distance, the Fourier basis already has roughly a 5 dB advantage in the modeling. But that is not what I am focusing on here, because then one could argue "let's just increase the model order", and so on and so forth. It is true, though, that the Fourier basis is a serious contender when it comes to such modeling.
0:05:52 So, now to the basis-generic signal modeling in the DFT domain. Because we would like to have the system identification in a frequency-domain multichannel form, we select the DFT domain. Since we are going to the DFT domain, we first find a block-based definition of the input signal; here R is basically the frame shift and M the frame size. In analogy to x, I can find the block-based definition of the nonlinearly mapped input signal, and I do that by applying the nonlinear mapping to all the individual samples of this vector, which gives this form. I can then replace the nonlinear mapping by the summation form I showed on one of the previous slides; after a rearrangement I can pull the summation sign out, and then I have this compact vector notation x_i, which is basically the i-th order version of the nonlinearly mapped input signal — a block-based definition whose elements are made from the i-th basis function — and which is, in effect, the i-th channel.
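As a sketch of this block-based, per-channel construction (using the basis functions from the earlier sketch; the overlap-save framing with frame size M and frame shift R is assumed here to mean "the most recent M samples per block"):

```python
import numpy as np

M, R = 256, 64                                 # frame size and frame shift (as used later)

def dft_channel_blocks(x, block_idx, basis, P, **kwargs):
    """DFT-domain excitations X_i, i = 1..P, for one signal block.

    Each basis function phi_i is applied sample-wise to the most recent
    M input samples, and the result is DFT-transformed: one 'channel' per i.
    Assumes block_idx is large enough that a full block of M samples exists.
    """
    end = (block_idx + 1) * R
    frame = x[end - M:end]                     # time-domain block of length M
    return [np.fft.fft(basis(frame, i, **kwargs)) for i in range(1, P + 1)]
```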
0:06:59 Now we want to convert to the DFT domain, and what we do is apply the DFT matrix. To see what is going on, we can replace this definition by the summation, which brings the coefficients a_i into play; we keep the coefficients outside, and then we have the higher-order components of the nonlinearly mapped input signal in the DFT domain.
0:07:24 Now that we have this formal definition of the nonlinearly mapped input signal in the DFT domain, we can go on to modeling the linear FIR system. We basically model it with M minus R non-zero coefficients, to make sure that the overlap-save constraint remains intact — more on that later. So this again is the DFT matrix, this is the time-domain vector, and we have its DFT-domain counterpart.
0:07:49 So, we know that the observation can be given as a function of the convolution between the nonlinearly mapped input and the echo path, plus the observation noise, and this matrix over here is basically a projection, or rather a selection window. To linearize the convolution in the DFT domain, the constraint consists of zero-padding and Fourier matrices; I can compile all of this into the form G, and then combine G and X to get C, so C is basically a constrained version of the nonlinearly mapped input. This gives a rather compact expression for the DFT-domain observation, which we can reformulate to arrive at an equivalent multichannel structure.
0:08:36 The way to go about that is to replace this vector by the summation expression — because that is how we derived it earlier — and instead of the summation sign we can use such a matrix notation. The coefficients a_i appear together with the identity matrix to make sure the dimensions stay consistent. And instead of combining the a_i with the input components, I combine them with the echo path, which gives an effective, or virtual, equivalent multichannel form. Then I gather everything again into this composite matrix and the stacked channel vector, and this is the observation model in its multichannel, composite form.
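To illustrate the rearrangement, here is a sketch that scales one common DFT-domain echo path by the expansion coefficients to form the virtual channels, and then produces the DFT-domain observation through an overlap-save constraint; the exact windowing matrices of the talk are replaced by an explicit "keep the last R samples" step, which is my reading of the constraint.

```python
import numpy as np

def virtual_channels(W, a):
    """Virtual multichannel echo path: channel i is a_i times the common path W."""
    return [ai * W for ai in a]

def observe_block(X_chan, W_chan, M, R, noise_std=0.0):
    """DFT-domain observation for one block: sum of per-channel products,
    constrained to the last R (valid) output samples, plus observation noise."""
    y_freq = sum(Xi * Wi for Xi, Wi in zip(X_chan, W_chan))
    y_time = np.real(np.fft.ifft(y_freq))[-R:] + noise_std * np.random.randn(R)
    return np.fft.fft(np.concatenate([np.zeros(M - R), y_time]))
```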
0:09:18 So how does it look diagrammatically? This is what has happened: we have combined the weights a_i of the nonlinear expansion — which can be any basis, Fourier or power series, or whatever anybody has a good idea for — with the echo path to get the virtual channels, and these are the basis excitation signals. And now you can appreciate — and I see fans of the Fourier modeling here — that if this is a multichannel identification problem, then all the classic problems of multichannel adaptive filtering resurface: if there is a lot of correlation between these excitation signals, then I run into something like the non-uniqueness problem. If I have a Fourier basis, then these excitation signals are mutually orthogonal, which avoids that problem, and that is a very good thing for the convergence, as you will see.
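A quick numerical check of that argument, assuming a uniformly distributed excitation in [-1, 1] (real speech would behave differently): the power-series channels are strongly correlated with one another, while the Fourier channels are much less so.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 200_000)            # illustrative zero-mean excitation
L, P = 1.5, 4

poly_ch = np.stack([x ** (2 * i + 1) for i in range(P)])                 # odd powers
four_ch = np.stack([np.sin(i * np.pi * x / L) for i in range(1, P + 1)]) # sine terms

def channel_correlation(ch):
    """Normalized cross-correlation matrix of the channel signals."""
    c = ch @ ch.T / ch.shape[1]
    d = np.sqrt(np.diag(c))
    return c / np.outer(d, d)

print("power-series channels:\n", np.round(channel_correlation(poly_ch), 2))
print("Fourier channels:\n", np.round(channel_correlation(four_ch), 2))
```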
0:10:12 So, now we come to the results and to what we have used for the multichannel evaluation. This is not a very fancy algorithm: it is a block-LMS-type multichannel frequency-domain adaptive filter, given by two equations, the error function and the update equation. Mu is the step size; basically the step-size matrix contains a step size for each channel, which is a function of an adaptation constant and the estimate of the power spectrum of that channel. The adaptation constant lies in this range, and the power-spectrum estimate can be obtained by such a recursive equation, where gamma here is the forgetting factor, in the range zero to one.
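A compact sketch of such a block-LMS-type multichannel frequency-domain update, under my assumptions about the overlap-save windowing: the error is formed from the last R samples of the block, each channel gets a step size normalized by its own recursively estimated power spectrum (forgetting factor gamma), and the update is constrained to M - R time-domain filter coefficients.

```python
import numpy as np

def mcfdaf_block(W_hat, S, X_chan, y_block, M, R, alpha=0.5, gamma=0.9, eps=1e-8):
    """One block update of a multichannel frequency-domain adaptive filter.

    W_hat  : list of P DFT-domain channel estimates (length M each)
    S      : list of P power-spectrum estimates (length M each, real)
    X_chan : list of P DFT-domain excitations for this block
    y_block: last R time-domain observation samples of this block
    """
    # Error signal via the overlap-save constraint
    d_hat = sum(Xi * Wi for Xi, Wi in zip(X_chan, W_hat))
    e_time = y_block - np.real(np.fft.ifft(d_hat))[-R:]
    E = np.fft.fft(np.concatenate([np.zeros(M - R), e_time]))

    for i, Xi in enumerate(X_chan):
        # Recursive power-spectrum estimate with forgetting factor gamma in (0, 1)
        S[i] = gamma * S[i] + (1.0 - gamma) * np.abs(Xi) ** 2
        mu = alpha / (S[i] + eps)                      # per-channel, per-bin step size
        # Constrained gradient update: keep only the first M - R filter taps
        g_time = np.real(np.fft.ifft(mu * np.conj(Xi) * E))
        g_time[M - R:] = 0.0
        W_hat[i] = W_hat[i] + np.fft.fft(g_time)
    return W_hat, S, e_time
```

Iterated block by block together with the earlier sketches for forming the excitations and the observation, this gives the kind of evaluation loop described next.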
0:11:00 For the evaluation we operated the multichannel frequency-domain adaptive filter with a frame size of 256 and a frame shift of 64. The linear-to-nonlinear power ratio, given by such an expression — basically relating the input signal to the nonlinearly mapped one — was 5 and 20 dB; if I have time I will discuss the 20 dB case as well, if not, then just the 5 dB one. The signal-to-observation-noise ratio — in the echo cancellation context, the observation noise — has been kept at 60 dB throughout, because we want to concentrate on the nonlinear modeling performance rather than the robustness against the observation noise. The two types of bases we consider are the power series and the Fourier series.
0:11:47 The performance measure is the relative error-signal attenuation, given by such an expression, and we also inspect the estimated nonlinear mapping. To obtain the nonlinear mapping estimate, we need to extract the nonlinear expansion coefficients, and we do that with this expression, which gives the coefficients that are optimal in the least-squares sense; the term in there is the estimate of the i-th virtual channel.
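For these two measures, a rough sketch (the exact expressions used in the talk may differ): the relative error-signal attenuation as the ratio of observation power to residual-error power in dB, and a least-squares extraction of the expansion coefficients from the virtual-channel estimates, which assumes each channel is the common path scaled by a_i and takes the first channel as the reference (a_1 normalized to one).

```python
import numpy as np

def error_attenuation_db(y, e):
    """Relative error-signal attenuation: observation power over residual power, in dB."""
    return 10.0 * np.log10(np.sum(np.abs(y) ** 2) / np.sum(np.abs(e) ** 2))

def extract_coefficients(W_hat):
    """Least-squares estimate of the expansion coefficients a_i from the
    virtual-channel estimates W_hat[i] ~ a_i * W, with a_1 := 1 as reference."""
    W_ref = W_hat[0]
    denom = np.real(np.vdot(W_ref, W_ref))
    return np.array([np.real(np.vdot(W_ref, Wi)) / denom for Wi in W_hat])
```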
0:12:18 So this is the performance comparison for the 5 dB case, which basically means that the clipping threshold is plus/minus 0.1. The first algorithm, which I use as an anchor, is the linear one: a single-channel adaptive filter without any provision for nonlinear processing, and you see it converges to an error attenuation in the area of 8 dB. Then we have the polynomial model with the power series; it is somewhat better, but its convergence is slow — the trend is that it keeps going up, and my belief is it would keep going up to somewhere or other, but we see that it is somewhat slower. Above it we have the polynomial model with Gram-Schmidt data-adaptive orthogonalization, and we see the performance it shows. And then we finally have the Fourier model, without any additional orthogonalization, and we see that it roughly matches — is even a bit better than — the polynomial model plus Gram-Schmidt. Hence the notion: if you have orthogonal inputs to the multichannel structure, then you get the same effect that the Gram-Schmidt orthogonalization provides, namely better convergence and higher attenuation.
0:13:30 Now let's have a look at the quality of the nonlinearity estimate. This is the ground truth, clipping at plus/minus 0.1, and this green curve is the polynomial model without any Gram-Schmidt — it is very far off. So there is a correspondence between the quality of the nonlinearity estimate and the error attenuation we saw before. Then we see that the polynomial with Gram-Schmidt and the Fourier one go almost hand in hand towards the fringes; the polynomial with Gram-Schmidt drifts off a bit on both sides, but I am not too worried about that for now, because my data is mostly concentrated in the range plus/minus 0.4. Of course, if there were higher levels and the data went beyond that range, then one would consider the Fourier model to be the better one. And since I still have a bit of time — okay.
0:14:19 So these are the performance comparisons for 20 dB, which basically means that my clipping threshold is plus/minus 0.3, and this is a milder nonlinear case: the higher-order polynomial terms do not contribute that much, and their coefficients are small. This basically means that my polynomial model and my polynomial model with the Gram-Schmidt are having a very nice day. But we still see a difference in the rate of convergence, and this happens because, for the Fourier model, even if the nonlinearity is mild — even if it is nearly linear — all the channels of the Fourier model remain active in the adaptation. That takes its time; it shows up, or rather takes its toll, in the convergence, but the Fourier model still goes higher than the other approaches. And the linear model levels off correspondingly, somewhere in the range suggested by the SNR, although there is no direct correspondence between the linear-to-nonlinear SNR and the virtual source of noise that this filter sees, so it stalls there. And, as we have seen, these two were also not performing that badly, and here in the nonlinearity estimate we see that they are also not performing that badly; but the Fourier one was still better than both of them, so it follows the ground truth somewhat better.
0:15:47 So, to bring it to a conclusion: we presented a Fourier-series alternative for the nonlinearity of the Hammerstein model — the traditional power series versus the mutually orthogonal Fourier series. We presented the signal model in the block frequency domain, which was basically derived in a basis-generic, per-channel way and was followed by an efficient multichannel representation, again relying on the DFT. And in the results, obtained with multichannel adaptive identification, we showed that the orthogonal Fourier series lines up with the polynomial modeling plus Gram-Schmidt orthogonalization, in the sense that it achieves high error-signal attenuation and effectively imitates the underlying nonlinearity. That is all from my side — thank you.
0:16:43 Okay, thank you. We have time for questions.
0:16:58 So, right, in your first set of results, the Gram-Schmidt variant of the polynomial had a fairly high variance — a lot of fluctuation in the response. Could you comment on that? Because when you compare that 5 dB result with the 20 dB one, there is virtually no fluctuation at all in the 20 dB case, but the response is not as good. So could you comment on how the difference in the responses may be contributing to that change?
0:17:36 If I go back to my polynomial model — right, it is easier to explain from here. If I take, let's say, the 20 dB case, that means that these coefficients of the polynomial series do not have a lot of magnitude, which means that these channels do not have a lot of excitation. So this basically means that the Gram-Schmidt orthogonalization does not have a lot to offer — it does not have a lot of influence. And the fluctuation in the 5 dB case could be because there might be some smoothing or something that could be further applied to the Gram-Schmidt orthogonalization; but, as I said, that might be something that could still be done — it was not the focus of what I was actually trying to do. So what I basically see is the difference in performance, which does not show up in the 20 dB case because there is not much room for improvement towards the ground truth, given the depleted excitation of those higher-order channels.
0:18:41 [The next question is largely inaudible.] 0:19:31 Yeah, okay — I did not go for that directly, because of this effect that you describe — [remainder inaudible].