Yes, and one of the lucky ones will have a full-time research position, so I don't have to teach.

The first author is one of my colleagues, and this paper is the result of joint work.

The previous papers were speaking about receiver problems; we will be speaking about a somewhat different problem: how can you directly obtain iterative algorithms, just like in turbo codes or in the iterative decoding of BICM, from maximum-likelihood estimation?

This question has been of interest for many years. The behavior of iterative decoding has been analyzed with tools such as EXIT charts and density evolution, and it has also been analyzed using factor graphs and belief propagation. Interestingly enough, a very basic understanding of the process was obtained using information geometry, and there have also been first attempts using optimization. So: how could we obtain an iterative algorithm, performing something close to maximum likelihood, using optimization? This is exactly what we will be doing.

So the outline is as follows. First, we will write the maximum-likelihood estimate. Then we will translate it, using two or three very simple tricks, into a form that can be handled in terms of optimization. Then we will show that, using a single approximation, we can obtain exactly the algorithm which is used for decoding BICM. Finally, this approximation gives us a trick to check whether the result of the BICM decoder is efficient or not. In other words, taking the time to really understand how the algorithm can be derived gives us information about the efficiency of the algorithm. So that's cool.

So this is the situation we will be considering. We have a convolutional code: the information bits here, the codewords here, which are interleaved, mapped to symbols, and then sent over the channel. I will not be working with a specific interleaver, a specific mapping, or whatever: everything in the paper is valid for any kind of interleaver and any kind of symbol mapping. Of course the performance will be different, as will be seen later.

In addition, a word about notation: bold letters here are vectors, so we distinguish between a vector and its individual bits. The binary messages are encoded into codewords, which go through the interleaver to give the interleaved sequence.

So, the maximum-likelihood sequence detector is the obvious one; it's very basic. You can write the maximization either over the binary messages, or over the coded bits, provided that you remove every combination of coded bits which does not belong to the code. This is a compact notation for saying that: the indicator function of the code has value one for a codeword, and value zero for a combination of bits which does not belong to the code.
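As a sanity check, the brute-force version of this detector can be sketched in a few lines. This is a toy sketch of my own: the four-word codebook, the BPSK mapping, and the AWGN likelihood are illustrative choices, not the paper's setup.

```python
# Brute-force ML sequence detection: the indicator chi() kills every bit
# combination which does not belong to the (toy) code.
import itertools
import math

# Toy codebook on 4 coded bits (illustrative, not from the paper).
codebook = {(0, 0, 0, 0), (0, 1, 0, 1), (1, 0, 1, 0), (1, 1, 1, 1)}

def chi(c):
    """Indicator function of the code: 1 for a codeword, 0 otherwise."""
    return 1.0 if tuple(c) in codebook else 0.0

def channel_likelihood(y, c, sigma=1.0):
    """Likelihood of observation y for bits c under BPSK (0 -> +1, 1 -> -1) in AWGN."""
    return math.prod(
        math.exp(-(yk - (1 - 2 * ck)) ** 2 / (2 * sigma ** 2))
        for yk, ck in zip(y, c)
    )

def ml_detect(y):
    """Maximize likelihood * indicator over every bit combination."""
    return max(itertools.product((0, 1), repeat=len(y)),
               key=lambda c: channel_likelihood(y, c) * chi(c))

y = [0.9, -1.1, 1.2, -0.8]   # noisy observation of codeword (0, 1, 0, 1)
print(ml_detect(y))          # -> (0, 1, 0, 1)
```

The exhaustive search over all bit combinations is exactly what makes the exact problem intractable for realistic block lengths, which is the point made next.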

Since this indicator function is not easily manipulated from an optimization point of view, we use a very small trick, which is not new and can be found elsewhere: you can maximize jointly over the argument and over the value of a probability p, so that the maximum is reached at the maximizer of the argument of this function. This has two advantages. The first one is that this probability can be fully factorized, meaning it is a product of individual probabilities for each bit; it is a matter of choice, nothing more. The second advantage is that this problem in p is continuous, which is much better for an optimization problem than a discrete one.
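To see why this reformulation is lossless, here is a small numeric check with a toy function f on two bits (my own illustration): maximizing the expectation of f over fully factorized pmfs still attains max f, because a point mass on the best configuration is itself a product distribution.

```python
# The trick: max over fully factorized pmfs of E_p[f] equals max_c f(c).
f = {(0, 0): 1.0, (0, 1): 3.5, (1, 0): 2.0, (1, 1): 0.5}

def expectation(p_bits):
    """E_p[f] where p_bits[k] = P(bit k = 1) and bits are independent."""
    total = 0.0
    for c, v in f.items():
        w = 1.0
        for k, ck in enumerate(c):
            w *= p_bits[k] if ck else 1 - p_bits[k]
        total += w * v
    return total

# Coarse grid search over factorized pmfs; the optimum sits at a corner
# (a deterministic product distribution), so the grid finds it exactly.
grid = [i / 10 for i in range(11)]
best = max(expectation((a, b)) for a in grid for b in grid)
print(best, max(f.values()))   # -> 3.5 3.5
```

The continuous relaxation therefore changes the search space without changing the optimum, which is what the talk uses.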

Even so, this problem is still intractable. Why? Because you have many bits in the vector, and if you want to compute this quantity for every possible combination, it costs a lot.

So the first remark was the factorization; the second remark is that in this problem you have two kinds of information. One kind of information comes from the channel, because you have a measurement; the other kind comes from the fact that you are looking for a codeword. So in the formulation here, we simply factorize accordingly: p takes care of one kind of information, and q takes care of the other. And since the probabilities are fully factorized, we will be working on bit marginals, meaning we will have n variables instead of 2^n.

Obviously, we cannot go very far using exactly this procedure, but let me stress that up to this point we do not have any kind of approximation: the bit marginals appear, and this equation here is exactly equivalent to the previous one.

The single approximation we make in this talk is this one: the bit marginal of the product is replaced by the product of the bit marginals. So the previous equation is replaced by this one. It obviously seems to be a coarse approximation, but it happens that if you choose the interleaver correctly, and if you choose the mapping correctly, it is not that bad an approximation.
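To make the nature of this approximation concrete, here is a tiny numeric illustration (numbers of my own, not from the paper) comparing the exact bit marginal of a product of two pmfs with the product of their bit marginals:

```python
# Single approximation: bit marginal of a product of pmfs, versus the
# product of the corresponding bit marginals (normalized).
def marginal(pmf, k, b):
    """P(c_k = b) under pmf, a dict mapping bit tuples to probabilities."""
    return sum(v for c, v in pmf.items() if c[k] == b)

p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
q = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

# Exact: bit-0 marginal of the normalized product p*q.
prod = {c: p[c] * q[c] for c in p}
Z = sum(prod.values())
exact = marginal({c: v / Z for c, v in prod.items()}, 0, 1)

# Approximate: normalized product of the two bit-0 marginals.
m1 = marginal(p, 0, 1) * marginal(q, 0, 1)
m0 = marginal(p, 0, 0) * marginal(q, 0, 0)
approx = m1 / (m1 + m0)

print(exact, approx)   # 0.75 versus 0.7: close, but not equal
```

With strongly correlated pmfs the gap widens; a good interleaver and mapping keep the factorized quantities close to independent, which is why the talk says the approximation is "not that bad" in practice.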

But the good thing now is that everything is tractable, because the computation of the marginals is tractable: this marginal can be computed, and this one can be computed as well.

However, we now have a problem which is different for each bit. So we have changed the problem: the criterion here depends on the position k of the bit. The original global problem is replaced by a distributed optimization strategy, with one problem, and one cost function, per bit.

This criterion is relevant for the bit at position k. And it is shown that if you solve it for bit k, the solution must be such that the estimate coming from one set of information and the estimate coming from the other set of information agree on the maximization, that is, on whether this bit should be a zero or a one. They have to agree on the value of this bit.

So in this context, the joint optimization of the criterion C_k must provide an agreement between the decoder estimate and the demapper estimate for the bit at position k. But since these are independent optimizations, they may not agree, even implicitly, on the estimates of the other bits.

And you will see in the next few slides that the classical iterative algorithm used for decoding BICM is exactly the solution of this kind of problem.

So an important point at this time is that we have derived the BICM decoding algorithm, the iterative one, using a single approximation of maximum likelihood. Now let me build on that and work a little more.

Meaning that if you now take the sum of these criteria, inconsistencies between the estimates of the bits at the various positions k can no longer go unnoticed. The value taken by this quantity will be an indication of the agreement between the decoder and the demapper for the whole sequence. If you look at them individually, they must agree bit by bit; but on the whole sequence they may disagree, and there is absolutely no guarantee. And the proof is here: if you do not have any kind of disagreement, the result provided by this criterion will be exactly the maximum-likelihood solution. This is proved in the paper. That is, in words, what is written here in the equation.

Now, this is only outlined here; we must come back to the individual criteria C_k. If you just minimize these individual criteria, you find the quantities written here. This p-tilde here is something which may look strange, but it is exactly what is computed by the BCJR algorithm.

So we will be fixing one set of the quantities and performing an alternating, iterative optimization. We have an initialization here: we initialize the p-tildes, say, to some uniform value, and then we compute the next quantity based on the past solutions: this one here for computing q, and this one here for computing p. If you just look at what everything is doing, this is the classical order of operations in BICM decoding, and this is exactly what is computed by the BCJR algorithm, meaning the channel-decoding algorithm.

This is a compact notation, and you can check numerically that it really is BCJR, even if it is not the well-known form. If you provide a priori probabilities to the BCJR procedure; if you compute the probability of each possible word as the product of these quantities; if you kill all the words which do not belong to the code; and if you then compute the marginals, which is exactly what is written here: you obtain exactly the output of BCJR.
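That check can be sketched in a few lines. This is a toy example of my own (a four-word code, BPSK over AWGN), and the brute-force marginalization stands in for the real BCJR trellis recursion, which computes the same quantities efficiently:

```python
# Brute-force "BCJR-like" step: weight every codeword by a factorized prior,
# marginalize per bit, then divide the prior back out to keep the extrinsic.
import math

codebook = {(0, 0, 0, 0), (0, 1, 0, 1), (1, 0, 1, 0), (1, 1, 1, 1)}
n = 4

def bit_evidence(y, sigma=1.0):
    """Per-bit channel probabilities for BPSK (0 -> +1, 1 -> -1) in AWGN."""
    out = []
    for yk in y:
        l0 = math.exp(-(yk - 1) ** 2 / (2 * sigma ** 2))
        l1 = math.exp(-(yk + 1) ** 2 / (2 * sigma ** 2))
        out.append((l0 / (l0 + l1), l1 / (l0 + l1)))
    return out

def decoder_extrinsic(prior):
    """Codeword-weighted bit marginals with the prior divided back out."""
    post = [[0.0, 0.0] for _ in range(n)]
    for c in codebook:                       # only codewords survive
        w = math.prod(prior[k][ck] for k, ck in enumerate(c))
        for k, ck in enumerate(c):
            post[k][ck] += w
    ext = []
    for k in range(n):
        e0 = post[k][0] / prior[k][0]
        e1 = post[k][1] / prior[k][1]
        ext.append((e0 / (e0 + e1), e1 / (e0 + e1)))
    return ext

prior = bit_evidence([0.9, -1.1, 1.2, -0.8])   # the demapper side
ext = decoder_extrinsic(prior)                 # the decoder side
decisions = tuple(int(prior[k][1] * ext[k][1] > prior[k][0] * ext[k][0])
                  for k in range(n))
print(decisions)   # -> (0, 1, 0, 1)
```

Dividing the prior out before feeding the result back is exactly the extrinsic-information exchange discussed next.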

And it turns out that these quantities, which we obtained by maximizing some criterion, are exactly what people in the coding world name the extrinsic information. So there is no magic in propagating extrinsics rather than a posteriori probabilities: it is the natural outcome of the optimization procedure.

So, to summarize up to here: we had an optimization problem which was formulated from maximum likelihood; we can obtain a formulation exactly equivalent to maximum-likelihood detection; we made an approximation, obtained by fully factorizing the probability mass functions; we then obtained a distributed algorithm, because we had individual quantities which were optimized through a distributed optimization strategy. And if we now come back to the global criterion, the sum of all the C_k, we have a way of evaluating whether the approximation was good or not: if we only looked at the individual criteria, we would not be able to check whether the demapper and the decoder were agreeing on the values of the other bits. So this provides a way of evaluating the quality of the solution of the BICM iterative decoder. Related problems are covered in another paper.

So let us see the results. What is new here? Not so much the fact that we recover BICM decoding, but the fact that we may have a way of checking whether the result is correct or not. You can see here the cost function of the global criterion versus, let's say, the bit error rate, the number of errors in the frame, and you can see that there is a rough correlation between them.

As another example, I show you this kind of result. We have a 16-QAM constellation with set-partitioning mapping, the (5,7) convolutional code, and we are mixing values of Eb/N0 from 5 to 12 dB with a uniform distribution, so it is a very complex situation.

If we choose the threshold to be equal to minus twenty here, then consider the bit error rate for the frames above the threshold, which are assumed to be good (since we are trying to maximize the criterion, the frames above the threshold are assumed correct). You see that above the threshold the bit error rate is quite stable, but for the frames below the threshold you have a much higher bit error rate.

It would be around here. The percentage of rejected frames which were in fact correct is very small: what is shown is the probability of false alarm, meaning the percentage of sequences which have been rejected, because the criterion was below the threshold, but which were in fact correct. This is the basis of the work.
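The screening rule can be sketched as follows. The threshold of minus twenty is the one quoted in the talk; the frame list and the correctness flags are invented numbers for illustration only:

```python
# Frame screening by the global criterion: frames whose criterion value falls
# below a threshold are flagged as probably erroneous (invented data).
frames = [
    # (criterion value, frame actually correct?)
    (-3.2, True), (-41.0, False), (-7.8, True), (-55.4, False), (-19.9, True),
]
threshold = -20.0

accepted = [ok for v, ok in frames if v >= threshold]
rejected = [ok for v, ok in frames if v < threshold]

error_rate_accepted = accepted.count(False) / len(accepted)  # errors kept
false_alarms = rejected.count(True)                          # good frames lost
print(error_rate_accepted, false_alarms)   # -> 0.0 0
```

In the talk's experiment the analogous statistics are an error rate that stays stable above the threshold and a very small false-alarm probability below it.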

Okay, let me summarize what we obtained: iterative decoding obtained from maximum likelihood. We had no specific assumption about the interleaver, the mapping, or whatever; these obviously impact the performance, through the quality of the approximation made in this work. We obtained, clearly, that we should propagate extrinsic information between both blocks, rather than a posteriori probabilities. We have a criterion which has been established and which we know how to use. And we have a process for evaluating the efficiency of the result, thanks to this analysis. Finally, it is likely (we are not sure yet; simulations are running to check it) that most of the errors we have in this kind of decoding are not due to a poor approximation, but that most of them are due to convergence to local minima. This makes sense, but we have to prove it, and this work is currently ongoing.

Question: Do you think the same kind of analysis can be applied to turbo equalization?

Well, yes: this applies to any kind of situation; BICM here is just a toy example. For instance, it applies to situations in which you have several sets of information on the same sequence. If you had three sets of information, say an additional code on top of these, this would apply as well. It's quite generic.

Question: Are you aware of any results from information theory which would justify your approximation?

No. I can send you the PhD thesis of a student who was specifically working on information theory; he applied information geometry to this kind of work. It's tricky, and we could not really justify the approximation.