Hello, I am working with Professor Sayed.

The topic I am going to talk about is the performance limits of LMS-based adaptive networks.

In this work we compared the performance of diffusion algorithms with other algorithms, for example the centralized block LMS and the distributed incremental LMS. We conclude that if we optimize the combination coefficients of the diffusion algorithms, then we can show that the diffusion algorithms outperform the other algorithms.

So let's start. The kind of network we are talking about consists of interactive, interconnected nodes that are interested in a common objective. For example, in this graph we have eight nodes, and they are interconnected. Let's assume the network model here: every node k has access to some measurement data, d_k(i) and u_{k,i}, and all the nodes are interested in estimating the unknown vector w°.

Besides the data that every node can collect from its local environment, the nodes can also exchange information over the links between them, to help each other improve their estimates. The diffusion strategy can provide a powerful mechanism for adaptation over such networks.

Here I would like to emphasize that these networks can be static, and they can also be mobile. In a mobile network, every node is moving, so the topology of the network is always changing, and that introduces some very challenging issues for the algorithms.

So the first solution is the centralized solution. Here you need a powerful fusion center sitting on top of the network; it is connected to every node, collects the data from every node, and puts all the data together to perform the adaptation. Note that this kind of solution suffers from several drawbacks. The first is that the fusion center is a super node in the network, so the scheme is vulnerable to its failure: if the fusion center goes down, everything goes down. Another drawback is that it suffers from random link failures: if one of the links drops, you isolate a node and you lose its data. This is not good.

So we may think about distributed solutions. One possibility is the incremental solution, which updates the estimate in a sequential manner. For example, look at this figure. Node 1 starts the update: it first takes the previous estimate w_{i-1}, uses its own data to update it to an intermediate estimate ψ_{1,i}, and then forwards this intermediate estimate to node 2. Node 2 adjusts the estimate according to its own data and passes it on to the next node, and so on. After the last node, node 8, has improved the intermediate estimate with its own data, it passes the resulting estimate back to node 1, and that becomes the new estimate w_i.
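
As a minimal sketch of this sequential update, assuming a scalar-measurement data model d_k(i) = u_{k,i} w° + v_k(i); the function and variable names are my own illustration, not taken from the slides:

```python
import numpy as np

def incremental_lms_cycle(w_prev, data, mu):
    """One incremental LMS cycle over the nodes in ring order.

    w_prev -- previous network estimate w_{i-1}, shape (M,)
    data   -- list of (d_k, u_k) pairs visited in ring order;
              d_k is a scalar measurement, u_k a shape-(M,) regressor
    mu     -- step size
    """
    psi = w_prev.copy()              # intermediate estimate passed around the ring
    for d_k, u_k in data:
        err = d_k - u_k @ psi        # local a priori error at node k
        psi = psi + mu * err * u_k   # local LMS update, then forward to next node
    return psi                       # after the last node this becomes w_i
```

Each node refines the circulating estimate with its own data and forwards it, and the output of the last node becomes the new network estimate.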

Alright, for this kind of solution the good point is that it does not need a powerful fusion center, which is good, but it also suffers from drawbacks. One drawback is that it, too, suffers from link failures: if this link fails, maybe you can find another way around it, but if that link fails, then you cannot. Another drawback is that if the network is mobile and the topology keeps changing, then during the adaptation you need to repeatedly recompute a cycle through the network on the fly. This is not trivial; as you know, finding such a cycle is an NP-hard problem, and it is usually very hard to do on the fly.

Alright, so we may think about other solutions, and one possibility is the diffusion strategies. Here, as shown in this figure, every node performs two things: one is adaptation according to its own data, and the other is exchanging intermediate estimates with its neighbors to further improve its own estimate. Every communication happens within a local area, so it does not consume a lot of energy, and the algorithm is robust to random link failures: if any one of the links goes down, you do not lose any node in the network.

There are two different kinds of algorithms. One is called the adapt-then-combine (ATC) strategy: it first performs the adaptation and then does the combination. The other is the combine-then-adapt (CTA) strategy: it first performs the combination step and then does the adaptation. Here we emphasize that we use convex combination coefficients.
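
A hedged sketch of the two orderings at a single node, in the same assumed data model as before; the neighbor lists and the weight vector a (nonnegative entries summing to one, with the node's own weight first) are illustrative assumptions:

```python
import numpy as np

def atc_step(w, d, u, mu, a, psi_neighbors):
    """Adapt-then-combine: local LMS update first, then a convex
    combination with the intermediate estimates from the neighbors."""
    psi = w + mu * (d - u @ w) * u                      # adaptation with own data
    estimates = [psi] + psi_neighbors                   # own estimate first
    return sum(ak * pk for ak, pk in zip(a, estimates)) # combination

def cta_step(w, d, u, mu, a, w_neighbors):
    """Combine-then-adapt: convex combination of the neighbors'
    previous estimates first, then a local LMS update."""
    phi = sum(ak * wk for ak, wk in zip(a, [w] + w_neighbors))  # combination
    return phi + mu * (d - u @ phi) * u                 # adaptation with own data
```

In ATC the combination averages freshly adapted estimates, which is consistent with the later result that the optimized ATC edges out the optimized CTA.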

Right, so before we compare different algorithms, we need to be aware of two important factors in the performance of an adaptive filter: one is the convergence rate, and the other is the steady-state mean-square error. Let's look at this figure, which shows the learning curve of an LMS adaptive filter. Basically, you can divide the curve into two parts: one part is the transient phase, and the other is the steady state. In the transient phase we are interested in how fast the curve drops, and in the steady state we are interested in how much error remains.

So when you compare different algorithms, you need to be fair. In this work, because we are more interested in the steady-state performance, we fix the convergence rate of every algorithm. That means every algorithm has the same convergence rate in the transient phase, and then we compare the steady-state mean-square errors.

To simplify the derivations and to highlight the insight, we use two-node networks. A two-node network is simple, but it already exhibits a lot of rich and interesting dynamics, and it is easy to analyze.

Right, so let's look at the algorithms for two-node networks: this one is ATC and this one is CTA. Here α is the combination coefficient applied to the intermediate estimate from node 1 itself, and β is the combination coefficient applied to the intermediate estimate from node 2.
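
To make the structure concrete, the two-node updates just described can be written out as follows. This is a reconstruction from the verbal description (row regressors u_{k,i}, step size μ, and α + β = 1 so the combination is convex), not a verbatim copy of the slide:

```latex
% ATC at node 1: adapt with local data, then combine intermediate estimates
\psi_{1,i} = w_{1,i-1} + \mu\, u_{1,i}^{*}\big(d_1(i) - u_{1,i} w_{1,i-1}\big),
\qquad
w_{1,i} = \alpha\,\psi_{1,i} + \beta\,\psi_{2,i}

% CTA at node 1: combine previous estimates, then adapt with local data
\phi_{1,i-1} = \alpha\, w_{1,i-1} + \beta\, w_{2,i-1},
\qquad
w_{1,i} = \phi_{1,i-1} + \mu\, u_{1,i}^{*}\big(d_1(i) - u_{1,i}\,\phi_{1,i-1}\big)
```

Node 2 runs the mirror-image recursion with its own data and weights.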

Alright. After some considerable algebra, we can get these two closed-form EMSE expressions. This is the network-average EMSE, which we define in this way, and we can see that the EMSE is a function of the combination coefficients α and β. So we can optimize over these two arguments to minimize the EMSE, and the result is shown on this slide.

Here we show that, after some nontrivial algebra, the combination weight α equal to this expression, together with β equal to this one, is the optimal choice. This combination rule coincides with the maximal ratio combining (MRC) rule from digital communications, which is used in the RAKE receiver in CDMA systems.
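
For intuition, maximal ratio combining weights each branch inversely to its noise variance, so under that analogy the optimal two-node weights take a form like the following (a sketch consistent with the MRC interpretation; the exact expression is the one on the slide):

```latex
\alpha^{o} = \frac{\sigma_{v,2}^{2}}{\sigma_{v,1}^{2} + \sigma_{v,2}^{2}},
\qquad
\beta^{o} = \frac{\sigma_{v,1}^{2}}{\sigma_{v,1}^{2} + \sigma_{v,2}^{2}}
```

In words, the estimate coming from the less noisy node receives the larger weight.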

Plugging the optimized combination coefficients back into the EMSE expressions, we get the minimized EMSE for the ATC algorithm and the minimized EMSE for the CTA algorithm. Here ρ is the dominant convergence mode of the diffusion algorithms, and γ is the ratio of the noise variances of the two nodes.

For the block LMS, the first thing we need to do is normalize its step size, because, as the next slide shows, when the step size is very small the block LMS can be approximated by the incremental LMS. For the incremental LMS we have two consecutive adaptation steps in one iteration, so to guarantee the same convergence rate we need to normalize the step size in this way. This is the EMSE of the block LMS algorithm, and ρ' is its dominant convergence mode. If the step size μ is very small, this term is dominated by 1 − 2μσ_u², and on the previous slide we saw that the dominant part of the convergence mode there is also 1 − 2μσ_u², so they are almost the same.

This is the EMSE for the incremental LMS. Similarly, we need to normalize its step size. Here we give a derivation to show that if the step size is small enough, the incremental LMS and the block LMS behave almost the same: just plug in the equation here, drop the higher-order terms, and you end up with this expression. This is the EMSE expression for the incremental algorithm.
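
A quick numerical way to see this small-step-size equivalence: apply one block LMS update and one incremental sweep to the same data and watch the gap shrink roughly like μ². The two-node data model and all names here are my own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 10
w = rng.standard_normal(M)     # current estimate
data = [(rng.standard_normal(), rng.standard_normal(M)) for _ in range(2)]

def block_update(w, data, mu):
    # block LMS: a single update using both nodes' errors at the same w
    grad = sum(u * (d - u @ w) for d, u in data)
    return w + mu * grad

def incremental_update(w, data, mu):
    # incremental LMS: sequential updates, each from the latest estimate
    for d, u in data:
        w = w + mu * (d - u @ w) * u
    return w

for mu in (1e-1, 1e-2, 1e-3):
    gap = np.linalg.norm(block_update(w, data, mu) - incremental_update(w, data, mu))
    print(f"mu = {mu:.0e}: gap = {gap:.3e}")   # drops ~100x per 10x decrease in mu
```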

Here we also include the stand-alone LMS, where there is no cooperation between the two nodes, and then we compare the EMSE performance of all of these algorithms as well.

So this is the result of the comparison. Look at the highlighted part: the optimized ATC algorithm is slightly better than the optimized CTA, and both are better than the other three algorithms, as shown by the theoretical results and also demonstrated in the simulations. This is for the network-average EMSE, and here we can see the optimized ATC is the best one.

Another interesting comparison is between the individual EMSE of the stand-alone filters and that of the diffusion algorithms. The result shows that, with the optimized diffusion, both of the two nodes can reach an EMSE which is lower than that of either one of the individual filters.

This is very interesting, because it means that even the node with the lower noise level can somehow benefit from sharing information with the bad node. It is interesting because we can imagine that if a node is selfish, it only wants to cooperate with other nodes when it can get something out of the cooperation; if it cannot gain from it, it will not do it. Here we show that if you optimize the combination weights, then every node benefits from the cooperation. That means that if, for example, we use these algorithms to model animal behavior, I think that is a reasonable thing to do. For other algorithms, you need some extra conditions to show that every node benefits from the cooperation.

These are the simulation results. First, let's look at the transient phase: all the algorithms have the same convergence rate here. The optimized ATC and CTA reach the lowest EMSE, and ATC is slightly better than CTA. The figure also shows that the block LMS and the incremental LMS have almost the same steady-state performance, and the worst one is no cooperation, as expected.

The simulation profile is shown here. We use a filter of length 10, the step size is 0.05, the noise variance for node 1 is 0.5 and the variance for node 2 is 3, and we assume white regressors with unit power. We simulate 2000 iterations and average the curves over 1000 runs.
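
A hedged Monte Carlo sketch of this profile (filter length 10, μ = 0.05, noise variances 0.5 and 3, white unit-power regressors, 2000 iterations, 1000 runs), using ATC diffusion with the MRC-style weights from earlier; every implementation detail beyond those stated numbers is my assumption:

```python
import numpy as np

M, mu, iters, runs = 10, 0.05, 2000, 1000   # runs=1000 is slow; shrink to taste
sig2 = np.array([0.5, 3.0])                 # noise variances of nodes 1 and 2
a = sig2[::-1] / sig2.sum()                 # MRC-style weights, larger on good node
rng = np.random.default_rng(0)

emse = np.zeros(iters)                      # network-average EMSE learning curve
for _ in range(runs):
    w0 = rng.standard_normal(M) / np.sqrt(M)         # unknown vector w°
    w = np.zeros((2, M))                             # per-node estimates
    for i in range(iters):
        U = rng.standard_normal((2, M))              # white regressors, unit power
        d = U @ w0 + np.sqrt(sig2) * rng.standard_normal(2)
        # EMSE_k(i) = E |u_{k,i} (w° - w_{k,i-1})|^2, averaged over the nodes
        emse[i] += np.mean(np.sum(U * (w0 - w), axis=1) ** 2) / runs
        psi = w + mu * (d - np.sum(U * w, axis=1))[:, None] * U   # adapt
        combined = a[0] * psi[0] + a[1] * psi[1]                  # combine
        w = np.stack([combined, combined])  # both nodes use the same weights here

print("steady-state EMSE estimate:", emse[-200:].mean())
```

The tail of the learning curve gives the steady-state value to compare across algorithms.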

Here are some references. As we can see, by using the diffusion algorithms we can model many animal behaviors found in nature, for example bacteria, honey bees, and fish schools, and we can also use these algorithms for cognitive radio.

Alright, so I am going to stop here.

I guess for the Gaussian case that you used in your simulations, it makes sense that this maximal-ratio-combining component is optimal. I wonder if you could make any comments on whether you have done anything with heavy-tailed distributions.

uh

whose what we have done in i would scale in that you always get something from taking into account camp

the bad stuff

but in some more gas in problems uh you the rules to

yeah what this war filling stuff for a right we've know

Let me add to the message here, okay?

The idea here is that you have two nodes: one has good noise, the other has bad noise, and each of them is trying to estimate some channel, some unknown parameter. If they do it independently, of course the good node will get a good estimate, right? Now, if both of them cooperate, let's say using diffusion, we expect the bad node to do better, because it is also getting access to the information from the good node. One conclusion from the analysis is that the good node also does better, even though it is getting bad information from the bad node, of course. So that's one conclusion. Note that in deriving these expressions there was no assumption of Gaussianity; the simulations actually used Gaussian data, but the MSE expressions that he derived do not assume Gaussian.

The other conclusion, which is very interesting, comes from the table. If you go to the table: what if you take the data of these two nodes and send it to a fusion center? A fusion center can do block LMS; it can do the LMS processing on the combined data. So if you send the data to a fusion center that does block LMS, will it do better than the distributed solution where the nodes talk to each other? The answer is that the diffusion solution will outperform even the fusion solution, and this is counterintuitive, right? Then you might say, well, if the fusion center can do anything, why doesn't it just implement the diffusion algorithm at the fusion center? Of course it can do that, but then what's the point? The diffusion algorithm can be implemented in a distributed manner, okay? So that is why I still call this counterintuitive: with the optimal weights, diffusion can outperform the block LMS solution implemented in a fusion center. But again, the expressions that he derived do not assume Gaussianity, although I think the simulations shown do assume Gaussian, yeah.

Thank you.

When you derive the optimal values for the combination coefficients, you derive them using the steady-state results. Could there be a better way, to derive adaptive coefficients on the fly?

Sure, thank you; that's a good question. What he is doing here is deriving the optimal weights that optimize the steady-state performance. We have another result, actually already published (it was presented at ICASSP, and there is a paper that describes it), where we adapt the combination weights on the fly and the adaptation finds the optimal weights. So yes, adapting them on the fly is something we have done.

Yes, I can see the expressions, but do these optimal combiners require local information only, or do they require a statistical profile of the network in order to find them?

Here the optimal combination coefficients need to know the noise profile across the network. We are trying to find some method by which you can estimate the noise profile across the network somehow; then you can come up with these coefficients.

That's a good question. In this expression, as you can see, the optimal coefficients depend on the noise profile in the network, okay? So this is a performance limit; it is telling you the best you can hope for if you knew this information. In the article I referred to before, the one where you adapt the coefficients on the fly, that is done based on the data that you have; there are no assumptions there, everything is estimated on the fly, yeah.

I think we should move along, to be fair to the last speaker here.