Thank you.

I would like to present our work on image prediction. I will start with a description of the problem that we address in this work. Then I will review the prior prediction methods: template-matching-based methods and sparse-approximation-based prediction. After that I will introduce our new approach, based on non-negative matrix factorization: first the direct application of NMF to prediction, and then a sparsity-constrained variant. I will show some experimental results, and finish my presentation with the conclusions.

In this work we address the problem of closed-loop intra image prediction. When we talk about closed-loop intra image prediction, the state of the art is the H.264 intra prediction modes. This kind of prediction works well in homogeneous regions of an image, or in regions whose structure follows the orientation of the modes. However, this interpolation-based prediction fails mostly in complex regions with texture and structure. Starting from that observation, there have been lots of template-matching-based algorithms, included as an additional mode in H.264, and even sparse approximation methods, which can be used as a generalization of template matching.

After this brief overview, let me remind you of the H.264 intra prediction modes. As you may already know, there are two types in the prediction system: 4x4 blocks with nine prediction modes, and 16x16 blocks with four prediction modes. The 4x4 modes comprise eight directional modes and one DC mode. The idea is simple: they just propagate, or interpolate, the pixel values which are already encoded, along the direction of each mode.
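For concreteness, here is a toy sketch of two of the nine 4x4 modes (vertical propagation and DC); the function name and interface are illustrative, and this is not the full standard logic:

```python
import numpy as np

def h264_intra_4x4(top, left, mode):
    """Toy versions of two of the nine H.264 4x4 intra modes.

    top:  the 4 reconstructed pixels directly above the block.
    left: the 4 reconstructed pixels directly to its left.
    """
    if mode == "vertical":   # mode 0: propagate the top row downward
        return np.tile(top, (4, 1))
    if mode == "dc":         # mode 2: rounded mean of the causal neighbors
        return np.full((4, 4), (top.sum() + left.sum() + 4) // 8)
    raise ValueError("only 'vertical' and 'dc' are sketched here")
```

The other seven modes interpolate along diagonal directions in the same spirit, always using only already-decoded neighbors.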

Template matching is also a very well known, simple algorithm that people use. We define the template, which is the causal neighborhood of the target block, and then we search, in a causal search window, for the candidate whose neighborhood has the minimum distance to the template. The prediction is then obtained by simply copying the matched patch into the position of the target block to be predicted.
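The match-and-copy step above can be sketched as follows; this is a minimal illustrative implementation (the function name, the L-shaped template layout, and the search-window handling are my assumptions, not the speaker's code, and for simplicity it searches the full image rather than only the reconstructed causal area):

```python
import numpy as np

def template_match_predict(image, y, x, block=4, tpl=2, search=16):
    """Predict the `block`x`block` patch at (y, x) by template matching."""
    def template(img, r, c):
        # L-shaped causal neighborhood: strip above + strip to the left.
        top = img[r - tpl:r, c - tpl:c + block].ravel()
        left = img[r:r + block, c - tpl:c].ravel()
        return np.concatenate([top, left])

    target_tpl = template(image, y, x)
    best_cost, best_pos = np.inf, None
    # Scan candidate positions in a window around the target block.
    for r in range(max(tpl, y - search), y + 1):
        for c in range(max(tpl, x - search), x + 1):
            if r == y and c == x:
                continue  # skip the target block itself
            if r + block > image.shape[0] or c + block > image.shape[1]:
                continue
            cost = np.sum((template(image, r, c) - target_tpl) ** 2)
            if cost < best_cost:
                best_cost, best_pos = cost, (r, c)
    r, c = best_pos
    # Copy the best-matching patch into the target block position.
    return image[r:r + block, c:c + block].copy()
```

On periodic texture, the copied patch can reproduce the target block exactly, which is why template matching helps precisely where directional interpolation fails.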

As an example, I would like to mention the template-matching averaging method. Template matching is added as an additional mode in H.264, resulting in a considerable bit-rate saving. The idea is simple: the 4x4 block to be predicted is divided into four 2x2 sub-blocks, template matching is applied on these sub-blocks, and the prediction of the 4x4 block is obtained by averaging multiple matched candidates for each sub-block. This results in up to fifteen percent bit-rate saving in H.264.

Template matching was later extended to a sparse prediction, or sparse-approximation-based, algorithm. Instead of matching a single template, it tries to combine several candidate patches: weighting coefficients are calculated, and once the sparse approximation of the template has been obtained, the selected patches are used, with the same weighting coefficients, to predict the target block.

Before going into the details of the formulation, I would like to introduce the notation. Suppose that we have an unknown block B, and that it has a causal support region C, the template, which has already been decoded. We stack the pixel values of the support region C into a vector b_c, and similarly the unknown pixel values of the block to be predicted into a vector b_p; putting these two together gives the vector b. We also have a matrix A: the image patches in the causal search window are put into the columns of this matrix. We then split this matrix into A_c and A_p, where A_c corresponds to the spatial locations of the support region C, and A_p corresponds to the locations of the block to be predicted.

How do we carry out the sparse prediction? We have a constrained approximation of the support region. We need this constraint because we can only approximate the template, and a good approximation of the template does not always lead to a good approximation of the block to be predicted. So what we do is run a greedy sparse representation algorithm, and at each iteration of the algorithm we keep the sparse vector obtained so far and check, at the encoder, whether the resulting approximation of the block to be predicted is good or not, using a limited number of iterations in the sparse approximation. In this case we need to signal the selected sparse vector, that is, the iteration index which gives the optimum reconstruction of the unknown block B. The prediction is then obtained simply by multiplying the corresponding matrix A_p with the selected optimum sparse vector.
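This encoder-side loop can be sketched as follows, assuming a plain matching pursuit as the greedy algorithm (the actual pursuit variant used may differ, and all names are illustrative); note that the true block pixels are available only at the encoder, which is why the chosen iteration index must be signalled:

```python
import numpy as np

def sparse_predict(A_c, A_p, b_c, b_p_true, max_iter=8):
    """Greedy (matching pursuit) sparse prediction.

    A_c: template rows of the patch dictionary, A_p: block rows.
    b_c: template pixels; b_p_true: true block pixels (encoder only,
    used to pick the best iteration index to signal).
    """
    norms = np.linalg.norm(A_c, axis=0)
    norms[norms == 0] = 1.0
    x = np.zeros(A_c.shape[1])
    residual = b_c.astype(float).copy()
    best_err, best_x, best_k = np.inf, x.copy(), 0
    for k in range(1, max_iter + 1):
        # Pursuit step: atom most correlated with the template residual.
        corr = A_c.T @ residual / norms
        j = np.argmax(np.abs(corr))
        coef = corr[j] / norms[j]
        x[j] += coef
        residual -= coef * A_c[:, j]
        # Encoder-side check: how well does x predict the block itself?
        err = np.sum((b_p_true - A_p @ x) ** 2)
        if err < best_err:
            best_err, best_x, best_k = err, x.copy(), k
    return A_p @ best_x, best_k  # prediction, iteration index to signal
```

The key point is that the atom selection is driven by the template residual, while the stopping decision is driven by the block itself.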

So this was the state of the art; now I would like to speak about our non-negative matrix factorization based algorithm. NMF is a low-rank factorization of data with the property that the factors are always non-negative, which is very useful for a physical interpretation of the results of the factorization. It is widely used in applications such as feature extraction, data mining, and noise removal.

In other words, suppose that we are given a non-negative data matrix V. We try to find non-negative matrix factors W and H such that V ≈ WH. The usual cost function of NMF is the Euclidean distance ||V − WH||², with the constraint that all the elements in the factor matrices are non-negative. This is a well known problem, and it was solved in 2000 by Lee and Seung with multiplicative update iterations: starting with randomized non-negative initializations of W and H, and alternating the update equations, it is proven that the Euclidean distance is non-increasing at each iteration.
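The Lee and Seung multiplicative updates for the Euclidean cost can be sketched as follows (the rank, iteration count, and random seeding here are illustrative choices):

```python
import numpy as np

def nmf(V, rank, n_iter=500, eps=1e-9):
    """Factor a non-negative matrix V as W @ H using Lee-Seung
    multiplicative updates for the Euclidean cost ||V - W H||^2."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Each update keeps the factors non-negative and does not
        # increase the Euclidean cost (Lee & Seung, 2000).
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because the updates are element-wise multiplications by non-negative ratios, non-negativity is preserved automatically, with no explicit projection step.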

uh

We can write this cost function of NMF in a vector form: suppose that we have a vector b which needs to be approximated as a matrix A times a vector x, with all values non-negative. The same multiplicative update equations work for this kind of problem. The difference here is that A and b are fixed, because they contain the data, and only x needs to be updated.

Let me just remind you what A is: A contains the image patches extracted from the causal search window. As in sparse prediction, we first find a representation of the support region, and then we approximate the unknown block with the same coefficients. More formally, in this iteration we just use b_c and A_c, which correspond to the template and to the dictionary restricted to the template.

Since we fix the dictionary A_c, we have only one multiplicative update equation, shown here for x. So x starts from random non-negative initial values, and it is iterated until a final iteration number is reached, or until a stopping condition specified by the prediction quality is satisfied. The predicted values of B are then obtained using the vector x from the final iteration of this algorithm, this time with the dictionary A_p, which corresponds to the block to be predicted.
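With the dictionary fixed, only the coefficient vector is updated; here is a minimal sketch of this prediction step as I understand it (variable names are illustrative, and the data are assumed non-negative as NMF requires):

```python
import numpy as np

def nmf_predict(A_c, A_p, b_c, n_iter=100, eps=1e-9):
    """Approximate the template b_c >= 0 as A_c @ x with x >= 0 using
    the multiplicative update (A_c fixed), then predict the unknown
    block as A_p @ x with the same coefficients."""
    rng = np.random.default_rng(0)
    x = rng.random(A_c.shape[1]) + eps
    for _ in range(n_iter):
        # Single multiplicative update: x stays non-negative, and the
        # template error ||b_c - A_c x||^2 is non-increasing.
        x *= (A_c.T @ b_c) / (A_c.T @ (A_c @ x) + eps)
    return A_p @ x
```

In a codec this loop would stop at a signalled iteration count or quality condition, as described above.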

Now I would like to show some experimental results. These are the preliminary prediction results. To evaluate the prediction performance, on Barbara and Cameraman we tested our algorithm and compared it with orthogonal matching pursuit and template matching; you can see at the top that the NMF algorithm has the highest prediction quality. In terms of coding efficiency, here are the results for the reconstruction of the first frame of the sequence we used; again you can see the decrease in the bit rate and the increase in the PSNR values, which greatly improve.

But now let us take a closer look at the prediction obtained with the NMF algorithm. As you can see, the prediction is somewhat distorted. This is because we do not have any constraint on the number of patches to be used: in the sparse approximations we fix the K value, the number of patches, and at most that many atoms are used for the prediction, but in NMF we did not have any such constraint.

Starting from this observation, we simply impose a sparsity constraint on the NMF: the constraint is just to allow at most K non-zero elements in the sparse vector x. Again, we keep track of these sparse vectors to find the optimum prediction, as in the sparse approximation method. If we formulate it, this looks similar to the sparse-approximation-based prediction algorithms, except that we have a non-negativity constraint on the coefficients. Of course, we need to signal to the decoder the selected value of K, the optimum number of patches, and the prediction is obtained and signalled in the same manner as in the sparse prediction method.
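One way to realize the at-most-K-non-zero constraint inside the multiplicative update loop is to hard-threshold the coefficient vector, keeping only the K largest entries (zeroed entries then stay zero under the multiplicative update). This is my illustrative reading of the constraint; the exact projection scheme used in the work may differ:

```python
import numpy as np

def sparse_nmf_predict(A_c, A_p, b_c, K, n_iter=100, eps=1e-9):
    """NMF-style prediction with at most K non-zero coefficients."""
    rng = np.random.default_rng(0)
    x = rng.random(A_c.shape[1]) + eps
    for _ in range(n_iter):
        x *= (A_c.T @ b_c) / (A_c.T @ (A_c @ x) + eps)
        # Hard sparsity: keep only the K largest coefficients; a zeroed
        # coefficient remains zero in all later multiplicative updates.
        if np.count_nonzero(x) > K:
            x[np.argsort(x)[:-K]] = 0.0
    return A_p @ x, x
```

The value of K that wins the encoder-side comparison is the integer that would be signalled to the decoder.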

Here, since we reduced the computational load thanks to the sparsity constraint, instead of using one template we decided to introduce nine modes, nine template configurations, and to select the best one. This is in order to compare with H.264, because H.264 has nine 4x4 intra prediction modes; so we decided to have nine modes as well and compare with the H.264 intra prediction. And since we select one of these nine modes, we need to signal it as an integer value to the decoder.

Now I would like to show you a region extracted from the Foreman image, predicted at a very low bit rate. You can see the H.264 prediction, and the sparse approximation and sparse NMF prediction methods. You can see the artifacts of the sparse approximation on the edges, while there are no such artifacts on the image predicted with sparse NMF. And on an image region extracted from Barbara, a textured region, you can clearly see the improvement in visual quality achieved by this algorithm.

Finally, here are the rate-distortion compression results, comparing the sparse approximation and NMF methods against H.264, on the Barbara and Foreman images. We tested the sparse approximation and NMF algorithms with the K value, the number of patches, varying from one to eight, with a 4x4 block size, and the best prediction is selected by a rate-distortion cost function. At the top, the red curve is NMF; the blue curves correspond to the sparse approximations that we discussed first, and the remaining curves correspond to the H.264 prediction modes.

So, to conclude: in this work we introduced a novel image prediction method which is based on an iterative non-negative matrix factorization algorithm, and with the sparsity constraint it performs even better. This method can also be applied to image inpainting and loss concealment applications. And as a final remark, this algorithm can be an effective alternative, and it is competitive compared with the other methods discussed before. This ends my presentation.

I would like to thank you for your time, and if you have some questions, I would be happy to answer them.

(Moderator) Are there any questions? Please step up to the microphone if you have a question. Thank you.

(Question) How is the computational cost compared with the other methods?

(Answer) The computational cost compared with H.264 is higher, because in H.264 the interpolation prediction filters are defined beforehand, and the algorithm just uses these filters to interpolate the pixel values. But that is also why H.264 does not work well for texture regions and complex structures, and the template-matching-based technologies therefore use somewhat more complex algorithms to overcome this shortcoming of the interpolation-based predictions. So yes, in terms of computational complexity it is higher than H.264, but compared with the sparse approximation algorithms it is about the same, I would say.

(Question) When you use the NMF with the sparsity constraint, what you have is very similar to a sparse representation, except that you have a non-negativity constraint on x. Why do you think your method performs better when you put in that constraint?

(Answer) Good question. Actually, in the sparse approximation method, at each iteration you try to approximate the template. At the first iteration you find the highest correlation between the template and the atoms in the dictionary, and then you get the residual; at the second iteration you process the residual image, the residual of the template. Now, in the spatial domain the template and the unknown block are correlated with each other, but in the residual domain they are not correlated. That is why, for example, the weighting coefficients in the sparse approximations might be very good for the template, but might contain high frequencies for the block to be predicted. In NMF, instead, you just use additive combinations of the patches, rather than correlation coefficients computed in the residual domain; as you see, we do not use the residual information at all in the NMF algorithm, we just use the patches which are very close to the template. I hope this is clear.

(Moderator) Okay, thank you very much.