0:00:13 I am working with Professor Sayed, and the topic I am going to talk about is the performance limits of LMS-based adaptive networks.
0:00:24 In this work we compare the performance of diffusion algorithms with other algorithms, for example the centralized block LMS and the distributed incremental LMS.
0:00:37 We conclude that if we optimize the combination weights, the coefficients, for diffusion, then we can show that the diffusion algorithms outperform the other algorithms.
0:00:50 So let's start. The adaptive networks we are talking about consist of interactive, interconnected nodes that are interested in a common objective.
0:01:03 For example, in this graph we have eight nodes and their interconnections. Let's assume, in the data model here, that every node k can access some measurement data, the regressors u_{k,i} and the measurements d_k(i), and all the nodes are interested in estimating the unknown vector w°.
0:01:27 Besides the data that every node can collect from its local environment, the nodes can also exchange information through the links between them and help each other improve their estimates.
0:01:42 Diffusion adaptation can provide powerful mechanisms for adaptation over networks. Here I would like to emphasize that the networks can be static networks, and they can also be mobile networks.
0:02:00 In mobile networks every node is moving, so the topology of the network is always changing.
0:02:16 That introduces some very challenging aspects for the algorithms.
0:02:23 The first solution is the centralized solution. Here you need a powerful fusion center sitting on top of the network; it is connected to every node, collects the data from every node, and puts all the data together to perform the adaptation.
0:02:46 Note that this kind of solution suffers from several drawbacks. The first is that the fusion center is a super-node in the network, so the solution is vulnerable to failures: if the fusion center goes down, everything goes wrong.
0:03:07 Another drawback is that it suffers from random link failures: if one of the links drops, you lose a node and you lose its data. This is not good.
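The centralized solution described above can be sketched as follows. This is an illustrative toy, not the talk's exact setup: the number of nodes, filter length, step size, and noise level are my own assumptions. A fusion center gathers one (regressor, measurement) pair from every node each iteration and averages the per-node stochastic gradients into one global update.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 10                      # nodes and filter length (assumed values)
w_true = rng.standard_normal(M)   # unknown vector the network estimates
mu = 0.1                          # step size (assumed)
w = np.zeros(M)                   # fusion-center estimate

for i in range(2000):
    U = rng.standard_normal((N, M))                 # row k: regressor of node k
    d = U @ w_true + 0.1 * rng.standard_normal(N)   # noisy measurements
    e = d - U @ w                                   # per-node errors
    w = w + (mu / N) * (U.T @ e)                    # averaged stochastic gradient
```

If the fusion center fails, or a node-to-center link drops, the corresponding rows of `U` and `d` are simply gone, which is exactly the fragility the talk points out.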
0:03:18 So we may think about distributed solutions. One possible way is to use the incremental solution, which updates the estimate in a sequential manner.
0:03:31 For example, look at this figure. Let's say node 1 starts: it first takes the previous network estimate w_{i-1}, uses its own data to update this estimate, and forwards the intermediate estimate to node 2.
0:03:59 Node 2 then adjusts this estimate according to its own data and passes it on to the next node, and so on.
0:04:10 After the last node, node 8, improves the intermediate estimate with its own data, it passes the resulting estimate back to node 1, and this becomes the new estimate w_i.
0:04:28 The good point of this kind of solution is that it does not need a powerful fusion center, but it also suffers from drawbacks.
0:04:40 One drawback is that it is also sensitive to link failures: if a node fails, maybe you can find another way around it, but if a link is down, then you cannot continue the cycle.
0:04:56 Another drawback is that if the network is moving and the topology changes, then during the adaptation you need to repeatedly recompute a cycle through the network on the fly. This is not trivial: finding such a cycle is an NP-hard problem, and doing it on the fly is usually very hard.
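The incremental cycle described above might be sketched like this, with the sequential pass 1 → 2 → … → 8 around a ring. The sizes, step size, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 8, 10                      # eight nodes, filter length 10 (assumed)
w_true = rng.standard_normal(M)
mu, sigma_v = 0.02, 0.1           # step size and noise level (assumed)

w = np.zeros(M)                                  # network estimate w_{i-1}
for i in range(1000):
    psi = w.copy()                               # node 1 starts from the previous estimate
    for k in range(N):                           # sequential pass 1 -> 2 -> ... -> 8
        u = rng.standard_normal(M)
        d = u @ w_true + sigma_v * rng.standard_normal()
        psi = psi + mu * u * (d - u @ psi)       # node k refines the intermediate estimate
    w = psi                                      # last node hands the result back to node 1
```

Note that the inner loop is literally a cycle visiting every node once; if the topology changes, that visiting order has to be recomputed, which is the NP-hard issue the talk mentions.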
0:05:19 So we may think about other solutions. One possibility is the diffusion strategies.
0:05:27 As shown in this figure, every node performs two things: one is the adaptation according to its own data, and the other is exchanging the intermediate estimates with its neighbors to further improve its own estimate.
0:05:47 Every communication is done in the local neighborhood, so it does not consume a lot of energy, and the algorithm is robust to random link failures: if any one of the links goes down, you do not lose any node in the network.
0:06:08 There are two different kinds of algorithms. One is called adapt-then-combine (ATC): it first performs the adaptation and then does the combination.
0:06:23 The other one is the combine-then-adapt (CTA) strategy: it first performs the combination step and then does the adaptation.
0:06:32 Here we emphasize that we use convex combination coefficients.
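A minimal sketch of the diffusion ATC strategy over an assumed ring topology with uniform convex combination weights. The talk's analysis uses optimized weights on a two-node network; the parameters here are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 8, 5                        # nodes and filter length (assumed)
w_true = rng.standard_normal(M)
mu = 0.05                          # step size (assumed)

# Ring topology: every node combines its own intermediate estimate with its
# two neighbors', using uniform convex weights (each column of A sums to 1).
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, k + 1):
        A[l % N, k] = 1.0 / 3.0

W = np.zeros((N, M))                               # row k: estimate of node k
for i in range(2000):
    U = rng.standard_normal((N, M))
    d = U @ w_true + 0.1 * rng.standard_normal(N)
    e = d - np.sum(U * W, axis=1)
    Psi = W + mu * e[:, None] * U                  # ATC step 1: adapt locally
    W = A.T @ Psi                                  # ATC step 2: combine with neighbors
```

Swapping the two steps (combine first, then adapt with local data) gives the CTA variant.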
0:06:43 Before we compare different algorithms, we need to be aware of two important factors in the performance of an adaptive filter: one is the convergence rate, and the other is the steady-state mean-square error.
0:06:58 Let's have a look at this figure. This is the learning curve of an LMS adaptive filter.
0:07:07 Basically, you can divide the curve into two parts: one part is the transient phase and the other part is the steady state.
0:07:15 In the transient phase we are interested in how fast the curve drops down, and in the steady state we are interested in how much error remains.
0:07:30 So when you compare different algorithms you need to be fair. For example, in this work, because we are more interested in the steady-state performance, we fix the convergence rate of every algorithm.
0:07:46 That means every algorithm has the same convergence rate in the transient phase, and then we compare the steady-state mean-square errors.
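The transient/steady-state split can be seen by estimating the EMSE learning curve of a single LMS filter by Monte Carlo averaging. All parameters here are illustrative assumptions, not the talk's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
M, mu, sigma_v = 5, 0.05, 0.1     # filter length, step size, noise level (assumed)
w_true = rng.standard_normal(M)

runs, iters = 200, 1000
emse = np.zeros(iters)
for r in range(runs):
    w = np.zeros(M)
    for i in range(iters):
        u = rng.standard_normal(M)                   # white regressor
        d = u @ w_true + sigma_v * rng.standard_normal()
        emse[i] += (u @ (w_true - w)) ** 2 / runs    # a-priori (excess) error power
        w = w + mu * u * (d - u @ w)                 # LMS update

steady = emse[-200:].mean()   # tail average = steady-state EMSE
```

The early part of `emse` is the transient phase (how fast the curve drops is set by the step size), and `steady` is the residual floor; for small step sizes the standard approximation is EMSE ≈ mu·M·sigma_v²/2 for white unit-power regressors.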
0:07:58 To simplify the derivation and highlight the insight, we use two-node networks. A two-node network is simple, but it already exhibits a lot of rich and interesting dynamics, and it is easy to analyze.
0:08:28 So let's have a look at the algorithms for two-node networks. This one is for ATC and this one is for CTA.
0:08:37 Here alpha is the combination coefficient that node 1 places on the intermediate estimate from node 1 itself, and beta is the combination coefficient that node 2 places on the intermediate estimate from node 2.
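A sketch of the two-node ATC and CTA recursions with the combination weights alpha and beta. The noise variances 0.5 and 3 are the talk's simulation values; the weights, step size, and filter length here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
M, mu = 10, 0.01
w_true = rng.standard_normal(M)
sig = (np.sqrt(0.5), np.sqrt(3.0))   # node noise standard deviations (talk's values)
alpha, beta = 0.8, 0.8               # illustrative convex combination weights

def data():
    u1, u2 = rng.standard_normal(M), rng.standard_normal(M)
    d1 = u1 @ w_true + sig[0] * rng.standard_normal()
    d2 = u2 @ w_true + sig[1] * rng.standard_normal()
    return u1, d1, u2, d2

# ATC: adapt with local data first, then combine the intermediate estimates.
w1, w2 = np.zeros(M), np.zeros(M)
for i in range(4000):
    u1, d1, u2, d2 = data()
    psi1 = w1 + mu * u1 * (d1 - u1 @ w1)
    psi2 = w2 + mu * u2 * (d2 - u2 @ w2)
    w1 = alpha * psi1 + (1 - alpha) * psi2
    w2 = (1 - beta) * psi1 + beta * psi2

# CTA: combine the previous estimates first, then adapt with local data.
v1, v2 = np.zeros(M), np.zeros(M)
for i in range(4000):
    u1, d1, u2, d2 = data()
    phi1 = alpha * v1 + (1 - alpha) * v2
    phi2 = (1 - beta) * v1 + beta * v2
    v1 = phi1 + mu * u1 * (d1 - u1 @ phi1)
    v2 = phi2 + mu * u2 * (d2 - u2 @ phi2)
```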
0:08:56 After some considerable algebra, we can get these two closed-form EMSE expressions.
0:09:12 This is the network-average EMSE, which we define in this way, and we can see that this EMSE is a function of the combination coefficients alpha and beta.
0:09:25 We can optimize over these two arguments to minimize the EMSE, and the result is shown in this slide.
0:09:34 Here we show, after some nontrivial algebra, that this combination weight alpha° and this beta° are the optimal ones.
0:09:46 This combination rule coincides with one known in the digital communications area: the maximal-ratio-combining rule used in the rake receiver.
0:10:01 Plugging the two optimized combination coefficients back into the EMSE expressions, you get the minimized EMSE for the ATC algorithm and the minimized EMSE for the CTA algorithm.
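Under the maximal-ratio-combining interpretation mentioned above, each node's intermediate estimate is weighted inversely to its measurement-noise variance. A small numeric illustration using the talk's simulation noise variances; note this is only the MRC-style form, and the exact optimal weights in the paper may also involve the convergence mode.

```python
# MRC-style rule: weights inversely proportional to the noise variances.
# The variances 0.5 and 3 are the talk's simulation values.
s1_sq, s2_sq = 0.5, 3.0
alpha = (1 / s1_sq) / (1 / s1_sq + 1 / s2_sq)   # weight on node 1 (low noise)
beta = 1 - alpha                                 # weight on node 2 (high noise)
# alpha = 6/7 ≈ 0.857: the cleaner node gets the larger weight
```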
0:10:18 Here rho is the mean convergence mode for the diffusion algorithms, and gamma is the ratio of the noise variances of the two nodes.
0:10:33 For the block LMS, the first thing we need to do is normalize its step size, because in the appendix we show that if the step size is very small, the block LMS can be approximated by the incremental LMS.
0:10:52 For the incremental LMS we have two consecutive adaptation steps in one iteration, so to guarantee the same convergence rate we need to normalize the step size in this way.
0:11:05 The EMSE of this algorithm is given here; rho' is the mean convergence mode for this algorithm.
0:11:16 We can see that if the step size mu is very small, this term is dominated by 1 - 2·mu·sigma_u², and in the previous slides we saw that the dominant part of the convergence for the diffusion algorithms is also 1 - 2·mu·sigma_u², so they are almost the same.
0:11:40 This is the EMSE for the incremental LMS. Similarly, we need to normalize the step size, and here we give the derivation to show that if the step size is small enough, the incremental LMS and the block LMS are almost the same: just plug in the equation here, drop the higher-order terms, and you end up with this expression.
0:12:08 This is the EMSE expression for the incremental algorithm.
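The claimed small-step-size equivalence between block LMS and incremental LMS can be checked numerically for the two-node case: two sequential updates with step mu contract the error by roughly (1 − mu)² ≈ 1 − 2·mu per iteration, which matches one joint update that applies mu to both gradients at once. The parameters below are illustrative assumptions, with the talk's noise variances.

```python
import numpy as np

rng = np.random.default_rng(5)
M, mu = 8, 0.002                     # filter length and (small) step size (assumed)
w_true = rng.standard_normal(M)
sig = (np.sqrt(0.5), np.sqrt(3.0))   # talk's noise variances

wb = np.zeros(M)   # block LMS: one joint update per iteration
wi = np.zeros(M)   # incremental LMS: two sequential updates per iteration
for i in range(20000):
    u1, u2 = rng.standard_normal(M), rng.standard_normal(M)
    d1 = u1 @ w_true + sig[0] * rng.standard_normal()
    d2 = u2 @ w_true + sig[1] * rng.standard_normal()
    # Block LMS: both gradients evaluated at the same point.
    wb = wb + mu * (u1 * (d1 - u1 @ wb) + u2 * (d2 - u2 @ wb))
    # Incremental LMS: node 1 adapts, node 2 refines the intermediate result.
    psi = wi + mu * u1 * (d1 - u1 @ wi)
    wi = psi + mu * u2 * (d2 - u2 @ psi)
```

With the same data fed to both, the two trajectories stay close, differing only in O(mu²) terms, consistent with the talk's small-step-size argument.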
0:12:17 Here we also include the stand-alone LMS, where there is no cooperation between the two nodes, and then we compare the EMSE performance of all these algorithms.
0:12:34 This is the result of the comparison. Have a look at the highlighted part: the optimized ATC algorithm is slightly better than the optimized CTA, and it is better than the other three algorithms,
0:12:54 just as shown by the theoretical results and also demonstrated in the simulations.
0:13:03 OK, so this is for the network-average EMSE; here we can see that the optimized ATC is the best one.
0:13:11 Another interesting comparison is of the individual EMSEs: we compare the EMSE of the stand-alone filters with that of the diffusion algorithms.
0:13:24 The result shows that for optimized diffusion, both of the two nodes can reach an EMSE which is less than that of either one of the individual filters.
0:13:39 This is very interesting, because it means that even the node with the lower noise level can somehow benefit from sharing information with the bad node.
0:13:51 It is interesting because we can imagine that if a node is selfish, it will only cooperate with the other nodes when it can get something from the cooperation; if it cannot gain from the cooperation, it will not do it.
0:14:10 Here we show that if you optimize the combination weights, then every node benefits from the cooperation.
0:14:17 That means, for example, that if we use these algorithms to model animal behavior, I think this is a reasonable way to do it.
0:14:28 With other choices of weights, you would need some additional condition to show that the good node also benefits from the cooperation.
0:14:39 These are the simulation results. First let's have a look at the transient phase: all the algorithms have the same convergence rate here.
0:14:52 The optimized ATC and CTA reach the lowest EMSE, and ATC is slightly better than CTA here.
0:15:05 The plot also shows that the block LMS and the incremental LMS have almost the same steady-state performance, and the worst one is no cooperation, as expected.
0:15:21 The simulation profile is shown here: we use a filter of order 10, the step size is 0.05, the noise variance is 0.5 for node 1 and 3 for node 2, and we assume white regressors with unit power.
0:15:48 We simulate 2000 iterations and average the curves over 1000 runs.
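A sketch of the stand-alone (no-cooperation) baseline under the talk's stated simulation profile: filter order 10, step size 0.05, noise variances 0.5 and 3, white unit-power regressors. I average over fewer Monte Carlo runs than the talk's 1000 to keep it quick.

```python
import numpy as np

rng = np.random.default_rng(6)
M, mu, iters, runs = 10, 0.05, 2000, 50   # talk's profile; 50 runs instead of 1000
noise_var = (0.5, 3.0)                    # node 1 and node 2 noise variances

emse = np.zeros(2)                        # tail-averaged steady-state EMSE per node
for r in range(runs):
    w_true = rng.standard_normal(M)
    w = [np.zeros(M), np.zeros(M)]        # two independent stand-alone filters
    for i in range(iters):
        for k in range(2):
            u = rng.standard_normal(M)    # white regressor, unit power
            d = u @ w_true + np.sqrt(noise_var[k]) * rng.standard_normal()
            if i >= iters - 200:          # accumulate steady-state a-priori errors
                emse[k] += (u @ (w_true - w[k])) ** 2 / (runs * 200)
            w[k] = w[k] + mu * u * (d - u @ w[k])
```

Without cooperation, the noisier node's steady-state EMSE scales with its noise variance, which is the gap that the diffusion strategies close in the talk's comparison.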
0:15:56 Here are some references. As we can see, by using the diffusion algorithms we can model many animal behaviors in nature, for example bacteria, honey bees, [inaudible], and fish schools, and we can also use these algorithms for cognitive radio.
0:16:19 All right, I am done here.
0:16:26 [Audience] I guess for the Gaussian case you used simulations to make sense of this maximal-ratio-combining component being optimal. I wonder if you could make any comments, or whether you have done anything, with heavy-tailed distributions.
0:16:47 [Audience] What we have done [inaudible] is that you always get something from taking into account the bad node, but in some non-Gaussian problems the rules [inaudible], like the water-filling stuff, right, which we know.
0:17:07 [Response] Let me repeat the message, OK. The idea here is that you have two nodes: one has good noise, the other one has bad noise, and each of them is trying to estimate some channel, some unknown parameter.
0:17:24 If they do it independently, of course the good node will get a good estimate, right? Now, if we allow them to cooperate, let's say using diffusion, we expect the bad node to do better, because it is getting access also to the information from the good node.
0:17:33 One conclusion from the analysis is that the good node also does better, even though it is getting bad information from the bad node, of course. So that is one conclusion.
0:17:42 Now, to derive these expressions there was no assumption of Gaussianity. The simulations actually use Gaussian data, but the EMSE expressions that you see do not assume Gaussianity.
0:17:56 The other conclusion, which is very interesting from the table, if you go to the table, is this: what if you take the data of these two nodes and send it to a fusion center, a center that can do block LMS, that can do any LMS processing?
0:18:06 Will it do better than the diffusion solution? The answer is that diffusion will actually outperform even the centralized block LMS solution, and this is counter-intuitive, right?
0:18:21 Then you might say: well, a fusion center that can do anything, why doesn't it just implement the diffusion algorithm at the fusion center? Suppose it can do that; but then what is the point? The diffusion algorithm can be implemented in a decentralized manner.
0:18:33 So the conclusion is that with just diffusion you can outperform the block LMS solution implemented at a fusion center.
0:18:45 But again, the expressions derived here do not assume Gaussianity, although I think the simulations shown do assume Gaussian data, yeah.
0:18:53 Thank you.
0:19:06 [Audience] You derive the alpha and beta coefficients using the steady-state analysis. Could there be a better way to derive them adaptively, on the fly?
0:19:23 [Presenter] Sure, thank you.
0:19:28 A good question. What he is asking is that here I derived the optimal weights that optimize the steady-state performance. We have another paper, already published, which was presented at ICASSP; in that paper we adapt the weights on the fly: at each iteration we find what the optimal weights are and adapt them on the fly. Yes, we have done that.
0:20:03 [Audience] I can see the expressions, but do these optimal combiners require only local information, or do they require a statistical profile of the network in order to find them?
0:20:19 [Presenter] Here the optimal combination coefficients need to know the noise profile across the network. We are trying to find a method by which you can estimate the noise profile across the network somehow; then you can compute the weights.
0:20:39 [Response] That is a good question. As you can see from this expression, the optimal coefficients depend on the noise profile in the network, OK? So this is a performance limit: it tells you the best you can hope for if you knew this information.
0:20:53 In the article I referred to before, the one where you adapt the coefficients on the fly, that is done based on the instantaneous data that you have; everything is estimated on the fly, yeah.
0:21:05 [Session chair] I think we should move along, to be fair to the last speaker here.