Okay, welcome. My name is [inaudible], I am from Germany, and today I will give a talk on MAP-based estimation of the parameters of a non-stationary process from noisy observations.

The talk is organized as follows: first I give the motivation for the problem. Since our method is based on the conventional MAP method, I will first review that method, and then explain the modifications which are necessary to extend the method to noisy observations. After that, results will be presented, followed by a summary and outlook.

Okay, to start with the motivation: we start with a process which is described by a white Gaussian stochastic process, denoted by y, where k can be interpreted as a time index. On the right-hand side you see an example, where samples of this process are given in dark red. What you can see here is that the mean and the variance of this process vary with time.

Now, the problem is that you are not able to observe these samples of the process directly; you are only able to observe noisy samples, which are denoted by y-hat. We assume that the observation error is zero-mean and has a time-varying variance, which may be strongly time-variant, but the variance is known. The question is now: how can you find a simple method for the estimation of the time-varying mean and variance of a process of which you can only observe noisy samples?
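The setup just described can be simulated in a few lines. The particular shapes of the mean and variance trajectories below are my own invention for illustration; only the structure (hidden white Gaussian samples plus zero-mean noise with known variance) comes from the talk:

```python
import math
import random

random.seed(0)

K = 500  # number of time steps (my choice, just for illustration)
y, y_hat = [], []
for k in range(K):
    mean_k = math.sin(2 * math.pi * k / K)             # assumed slowly varying mean
    var_k = 1.0 + 0.5 * math.cos(2 * math.pi * k / K)  # assumed slowly varying variance
    noise_var = 0.2                                    # known observation-error variance
    sample = random.gauss(mean_k, math.sqrt(var_k))    # hidden sample of the process
    y.append(sample)
    # what the estimator actually sees: the sample corrupted by zero-mean noise
    y_hat.append(sample + random.gauss(0.0, math.sqrt(noise_var)))
```

The estimator only ever gets `y_hat`; `y` is kept here purely so that a simulation can check estimation accuracy against the truth.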

Okay. The idea is the following: we assume that the mean and the variance are only slowly time-varying, so we want to exploit the correlations between successive values. And since we want to exploit a priori knowledge which we gain from the previous observations, we use a maximum a posteriori (MAP) based approach.

Since the conventional MAP method will be the basis for the method which we propose, I will first review this method, which I think everybody here knows. For the first case we assume a stationary process, that is, the parameters do not vary with time; that being said, we have a fixed mean and variance, and we assume that there is no noise in the observations.

The concept, I think, everybody knows: you have a set of observations y_1 to y_N, you start with the prior pdf which you gain from these observations, and you try to improve your estimates based on a new observation y_{N+1}. The idea is that you compute new estimates by a maximization of the posterior pdf, and I think everybody knows that this posterior is composed of the prior pdf, which actually carries the information from the past observations, and the likelihood of the new observation.

Now, what are the components of this posterior pdf? If you have Gaussian observations, the likelihood is of course Gaussian, and then you have to assume a conjugate prior, which in this case is something like a product of a scaled inverse chi-square distribution multiplied by a Gaussian distribution. You have four hyperparameters: two of them, a location and a scale parameter, represent the knowledge you have gained from the previous observations about the mean; and for the variance you have the degrees of freedom and a scale parameter, which I denote by nu and sigma squared.

Now, when you get a new observation, you update the parameters: you actually increase the scale parameter and the degrees of freedom by one, which means you have got one observation more. The new estimate for the mean is a weighted average of the old value and the new observation, where the weighting factor for the new observation is inversely proportional to the number of observations plus one. A similar expression holds for the second-order term.
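These update rules can be written down compactly. The sketch below follows the usual normal-scaled-inverse-chi-square convention; the hyperparameter names (mu, kappa, nu, s2) are mine and may differ from the notation on the slides:

```python
def map_update(mu, kappa, nu, s2, y_new):
    """One recursive conjugate update for a Gaussian with unknown mean/variance.

    mu    - location hyperparameter (current mean estimate)
    kappa - number of (pseudo-)observations backing the mean
    nu    - degrees of freedom for the variance
    s2    - scale hyperparameter for the variance
    """
    kappa_new = kappa + 1
    nu_new = nu + 1
    # weighted average: the new sample enters with weight 1/(kappa + 1)
    mu_new = mu + (y_new - mu) / kappa_new
    # second-order update of the variance scale
    s2_new = (nu * s2 + kappa * (y_new - mu) ** 2 / kappa_new) / nu_new
    return mu_new, kappa_new, nu_new, s2_new
```

Feeding samples in one after another reproduces the recursive behaviour described in the talk: every update counts as one more observation, and the influence of each new sample shrinks as kappa grows.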

Now, when you have computed these parameters, you can compute the maximum of the posterior pdf, and you get the estimates for the mean and the variance. That is the standard approach, okay.

What happens now? Okay, here is an example for a stationary process; the mean and the variance are chosen to be one, and you see an example run of five hundred samples. Below are the estimates which are obtained from the conventional method, and on the right-hand side you see the posterior pdf, which is shown here after ten observations. As you see, after ten observations the pdf is actually still very flat, and its center is quite far away from the desired point, which should be at (1, 1). Now, what happens if the number of observations increases? Then the distribution gets more peaked and gets closer to the desired point; you see that you get much more sure about your estimates.

Okay, now what happens if you have a non-stationary process? You still have noise-free observations, but the parameters are assumed to be time-varying. What you can do is to introduce a forgetting scheme and keep the scale parameter and the degrees of freedom from being increased; that means you assign a constant value N to both of them. This means that you actually only use the information of the last N observations from the past, and this value N controls the trade-off between estimation accuracy and tracking ability: if you choose a high value for N, you have a very good estimation accuracy, but the tracking ability will of course be low.
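The forgetting variant freezes the two counting hyperparameters at a constant N, so old data is geometrically discounted. A minimal sketch, under my reading of the slide that both parameters are clamped to the same value N:

```python
def map_update_forgetting(mu, s2, y_new, N):
    """Conjugate update with kappa and nu held fixed at N (forgetting).

    Effectively only about the last N observations influence the estimates,
    which trades estimation accuracy against tracking ability, as in the talk.
    """
    mu_new = mu + (y_new - mu) / (N + 1)
    s2_new = (N * s2 + N * (y_new - mu) ** 2 / (N + 1)) / (N + 1)
    return mu_new, s2_new
```

With a large N the update behaves almost like the stationary recursion (smooth but slow to track); with a small N it reacts quickly but the estimates fluctuate more.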

Okay, now an example again: here you have a process with a time-varying mean and variance; the functions for them are given here, and you see an example run with two thousand samples. Below you can see the estimates for the mean on the left-hand side and the estimates for the variance on the right-hand side. You can see that the estimates for the variance are in fact much coarser, since second-order statistics have to be estimated here. What happens now if we increase the value of N? Then the estimates get much smoother, of course, but you also see that the tracking is not as good. The variance can still be tracked here, but only because the chosen function for the variance varies very slowly in time.

Now, what happens if noisy observations are present? That is the interesting case, and now the question is what kind of modifications must be done. What happens in the case of noisy observations is that the likelihood changes: you can see that the variance of the observation noise is now added to the corresponding terms of the likelihood function. And the problem is now that for this likelihood function there is of course no conjugate prior, since the likelihood function contains a factor with the variance of the observation error in it, and this factor destroys the conjugacy with the prior distribution.

Now, what happens if you just apply the conventional method without considering the observation error? You will get a bias. Here you see an example with fixed mean and variance, and the observation error is chosen to be random: it is drawn uniformly from the interval here, where the right border is actually a scaled sine-squared function. Here on the left-hand side you see such a process; in dark red again the noise-free samples, and in blue the noisy observations. And what happens, as you can see on the right-hand side, is that what the algorithm actually estimates for the variance is very biased, since it actually tries to estimate the variance of the composite process, that means signal plus observation error. And since the variance of this composite process fluctuates very heavily over time, the estimate is actually not a reasonable solution, and if the variance of the error is high, the bias is high as well.

Now let's come to what has to be done: we have to consider the observation error. Our proposed method consists of two components. In the first step we propose to find a good approximation of the maximum of the true posterior pdf and of the scale parameter. In the second step we propose to approximate the posterior pdf with a pdf of the same shape as the prior, with the constraint that the maximum of the true posterior and of the approximated posterior must match, and we assume the same degrees of freedom for the true and the approximated posterior.

Now I come to the first point. Here you have the true posterior pdf; it looks quite complicated, but two things here are important, so I will show you both of them. In principle you could take this expression and find its maximum with a two-dimensional numerical search over mean and variance, but this would on the one hand be very computationally expensive, and the second point is that even if you could compute the maximum this way, you would still have no clue about the scale parameter.

Now comes the whole idea. If you look at these expressions, which I have marked in colour, they resemble the expressions of the prior pdf. In the prior pdf these expressions are constants, but here these expressions are actually functions of the variance. And if you look at these functions, for example at the scale parameter for the mean, you see that this function lies between the previous value kappa_N and the previous value kappa_N plus one; and the same holds for the mean: it lies between the old mean and the new observation.

Our idea was now motivated by the fact that only those values which are in the vicinity of the true variance really matter, since the prior pdf has high values in that region. For this reason we proposed to approximate these functions of the variance by constants, by plugging in the variance estimate of the process from the previous time step. If we do this, we get constants for the scale parameter and for the mean.

The first advantage is that we avoid the maximum search in one dimension, and the second is that we directly get a scale parameter. You can also see what happens if we do this: for example, look at this term here. If the observation error is very high, the term is dominated by the observation error, and the new estimate will actually be approximately equal to the old estimate; that means that from a very noisy observation you cannot learn anything, so you stick to the old value. And what happens if the observation error is very low compared to the old variance estimate? Then this term approaches an expression which is equal to one, and that means that you can learn very much from this observation. Okay.
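The limiting behaviour just described (very noisy observation: keep the old estimate; very clean observation: trust the new sample) can be illustrated with a simple noise-dependent weight. This formula is my own illustration of the qualitative effect, not the exact expression from the paper:

```python
def noisy_mean_update(mu, s2_hat, y_hat, sigma_obs2, kappa=10.0):
    """Illustrative mean update with a noise-dependent weight.

    The weight of the new observation shrinks as the (known) observation-error
    variance sigma_obs2 grows relative to the process-variance estimate s2_hat.
    With sigma_obs2 = 0 it reduces to the conventional weight 1/(kappa + 1).
    """
    w = s2_hat / (s2_hat + sigma_obs2) / (kappa + 1)
    return mu + w * (y_hat - mu)
```

For huge `sigma_obs2` the update is essentially a no-op (stick to the old value); for `sigma_obs2 = 0` it coincides with the noise-free recursion.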

Okay, and the same of course holds for the mean, so we have found the mean and the scale parameter. In the second step we find the maximum of the posterior pdf with respect to the variance, and we have shown in our paper that this is equivalent to finding the only root of a polynomial in a known interval. This can be done very easily with a bisection method, and so the whole evaluation of the new method can actually be done in a very simple and computationally efficient way.
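Finding the single root of a function inside a known bracket is exactly what bisection does. A generic sketch (the actual polynomial from the paper is not reproduced here):

```python
def bisect_root(f, lo, hi, tol=1e-10):
    """Standard bisection on [lo, hi]; f must change sign on the interval.

    The talk's claim is that the relevant polynomial has exactly one root in a
    known interval, which is the ideal setting for this method.
    """
    assert f(lo) * f(hi) < 0, "f must bracket a root"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid  # root lies in the left half
        else:
            lo = mid  # root lies in the right half
    return 0.5 * (lo + hi)
```

Each iteration halves the bracket, so the cost is logarithmic in the desired accuracy, which is why the overall method stays computationally cheap.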

Okay, and now we come to the second step. We have found the maximum of the true posterior, and we have found an approximation of the scale parameter. Now we approximate this posterior with a pdf which has the same shape as the prior, in order to apply the method recursively. For this we have to choose the hyperparameters. The first two hyperparameters, which refer to the mean, have already been chosen at this point. Then we have to choose the degrees-of-freedom parameter, which counts in a sense the observations, and we set it actually to kappa_N plus one; and with this setting we also get the scale parameter for the variance.

Here is now an example of the true posterior pdf on the left-hand side and the approximated posterior pdf on the right-hand side. I do not know if you can see any difference; what you might see is that the true pdf here is slightly rotated to the right-hand side, while this one here is actually symmetric with respect to this axis. But what I want to show is that they are actually quite similar.

Now an example: here again a process with constant mean and variance, and the observation errors are again random. We have a comparison between the conventional method and the proposed method. On the left-hand side you see first a comparison between the mean estimates. Here the conventional method of course also estimates the true mean, since the mean of the blue samples is of course the same as that of the dark red samples, because the observation error is zero-mean. But you see that the proposed method's estimates are more accurate. And the same holds for the variance estimates: you see that the proposed estimate is quite accurate and shows no bias, while the conventional estimation method is quite biased.

Now an example for a non-stationary process: we have a time-varying variance, an example run with two thousand observations, and the observation noise is again random; the right border of the noise interval is here controlled by a factor c, which controls the maximum variance of the error. Here you see a comparison of the performance: the estimates of the conventional method fluctuate very heavily, while our method is much more accurate here. And again you see, for the variance estimate, a very high bias for the conventional method, which is not true for the proposed method.

Um, I have just two slides left, I think that will be okay. What we did here is the following: we measured the root mean squared error over the right border of the interval for the observation error. What you can see here are the root mean squared errors for the mean and the variance, for the conventional and the proposed method, and you see that the performance is always improved compared to the conventional method, and that the improvements get more pronounced with increasing observation noise.
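The error measure used here is the usual root mean squared error; for reference, a minimal helper (my own formulation of the standard definition):

```python
import math

def rmse(estimates, truth):
    """Root mean squared error between an estimate sequence and true values."""
    return math.sqrt(
        sum((e - t) ** 2 for e, t in zip(estimates, truth)) / len(truth)
    )
```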

Now I come to the conclusion. We have proposed an approximate MAP approach for the estimation of slowly time-varying parameters of a non-stationary white Gaussian random process. We have shown that in the absence of observation noise it is equivalent to the conventional MAP method, but that in the presence of observation noise it has improved estimation accuracy; and what is important is that it is computationally efficient. The only restriction of this approach is that the variance of the observation error has to be known, and this is, as stated in the paper, something we still have to analyse: what happens if you do not know the variance of the observation error exactly but have just an estimate of it. But I can say that this method will not be very sensitive to this. That is future work. Thank you very much for your attention.

And now there is time for a couple of questions.

[Question inaudible, apparently about applications of the method.]

Uh, I supposed that this question would come. So far we have just assumed all these conditions; it is just a method: if these assumptions hold, we can give a method to estimate the parameters. An application might be, for example, if you have some sensor signals which are noisy and you know the observation error which you can expect; then you are able to estimate something like a bias in the mean, or something like that. This might be an application, but we did not look for a concrete application.

[Question inaudible.]

Uh, no, we didn't analyse it in that connection.

[Follow-up inaudible.]

Uh, no, we didn't.

Um, you mean whether we proposed to compare the performance of our method with... which one? Okay, of course. No, we have actually measured the accuracy with a measure like the root mean squared error or something like that; we just showed that this method works quite well, and that is what you saw on the last slide: the performance in this kind of a metric. Thank you.

Okay, thank you. And now our next speaker.