0:00:14 Thank you. My name is [inaudible], and I would like to welcome you to this session.
0:00:26 I'll start with a description of the problem that we address in this work. Then I will remind you of some prior image prediction methods, namely template-matching-based and sparse-approximation-based prediction methods. Next, I will introduce our new approach based on non-negative matrix factorization: first a direct application of NMF to the prediction problem, and then a sparsity-constrained variant. I will then show some experimental results, and I will finish my presentation with conclusions.
0:01:03 In this work we address the problem of closed-loop image prediction. When we talk about closed-loop image prediction, the state of the art is still the H.264 intra prediction modes. This prediction method actually works well in homogeneous regions of an image, where it propagates neighbouring pixel values along directions, sometimes referred to as the orientations of the modes. However, this kind of interpolation fails mostly in complex texture regions and structures.
0:01:46 To overcome this kind of shortcoming, there have been a lot of template-matching-based algorithms, included for example as an additional mode in H.264, and even sparse approximation methods can be used as a generalization of template matching.
0:02:05 After giving this brief introduction, I would like to remind you of the H.264 intra prediction modes. As you may already know, there are two types of prediction: 16x16 and 4x4. The 16x16 type has four prediction modes, and the 4x4 type has nine modes, including one DC and eight directional modes. The idea is simply to replicate, or extrapolate, the pixel values which are already encoded, along the direction of the mode, as illustrated on this slide.
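To make the directional idea concrete, here is a minimal NumPy sketch of two of the nine 4x4 modes, vertical and DC; the mode definitions are standard H.264, but the function names and the toy neighbour values are my own, and the sketch assumes the reconstructed neighbours are available:

```python
import numpy as np

def intra4x4_vertical(top):
    """Vertical mode: replicate the four reconstructed pixels above the block."""
    return np.tile(np.asarray(top, dtype=float), (4, 1))

def intra4x4_dc(top, left):
    """DC mode: fill the block with the rounded mean of the neighbours."""
    dc = (int(np.sum(top)) + int(np.sum(left)) + 4) >> 3  # H.264-style rounding
    return np.full((4, 4), float(dc))

# Toy usage with made-up neighbour pixels.
top, left = [100, 102, 104, 106], [98, 99, 101, 103]
print(intra4x4_vertical(top))
print(intra4x4_dc(top, left))
```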
0:02:44 Template matching is also a very well-known, simple algorithm. People define a template, a region very close to the pixels of the target block, and then, in a causal search window, find the candidate whose template has the minimum distance to the template of the target block. The prediction is then obtained by just copying the matched candidate's pixels onto the pixels of the target block to be predicted.
0:03:19 As an example, I would like to show you the template matching averaging method. It was added as an additional mode in H.264 and results in a noticeable bitrate saving. The idea is simple: the 4x4 block to be predicted is divided into four sub-blocks, and template matching is applied on these sub-blocks. Here the prediction of each sub-block is obtained by averaging multiple matched candidates, and this kind of averaging results in up to fifteen percent bitrate saving in H.264.
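As an illustration of the whole procedure (my own sketch, with hypothetical parameter names such as the template width `tw`, and assuming the block sits away from the image borders), template matching with optional candidate averaging could look like this:

```python
import numpy as np

def _template(img, y, x, bs, tw):
    """L-shaped template: a band of width tw above and to the left of the block."""
    top  = img[y - tw:y, x - tw:x + bs].astype(float).ravel()
    left = img[y:y + bs, x - tw:x].astype(float).ravel()
    return np.concatenate([top, left])

def tm_predict(img, y, x, bs=4, tw=2, search=16, n_avg=1):
    """Predict the bs x bs block at (y, x) by template matching over a
    causal search window; n_avg > 1 averages the best candidates."""
    tmpl, cands = _template(img, y, x, bs, tw), []
    for cy in range(max(tw, y - search), y + 1):
        for cx in range(max(tw, x - search), min(x + search, img.shape[1] - bs) + 1):
            if cy + bs > y and cx + bs > x:   # skip patches not yet decoded
                continue
            d = np.sum((_template(img, cy, cx, bs, tw) - tmpl) ** 2)
            cands.append((d, img[cy:cy + bs, cx:cx + bs].astype(float)))
    cands.sort(key=lambda t: t[0])
    best = [blk for _, blk in cands[:n_avg]]
    return sum(best) / len(best)              # plain copy when n_avg == 1
```

With `n_avg=1` this is plain template matching; larger values average the best candidates, in the spirit of the averaging mode just described.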
0:04:04 We can simply generalize this into sparse prediction, that is, a sparse-approximation-based algorithm. Instead of matching a single template, it tries to approximate the template as a combination of candidate patches: we calculate weighting coefficients, which are obtained by running a sparse approximation on the template, we keep the selected patches, and we then use the same combination to predict the target block.
0:04:35 Before going into the details of the formulation, I would like to introduce the notation. Suppose we have a block to be predicted, which is unknown, and its causal support region, the template, which we call C. We stack the pixel sample values into a vector: the sub-vector b_C contains the known pixel values of the support region C, and, since the block is unknown, the sub-vector b_P contains the values of the block to be predicted, P. Putting these two together gives the vector b.
0:05:20 Then we have a matrix A: we take all the image patches in the causal search window and put them into the columns of this matrix. We then split this matrix into A_C and A_P, where A_C corresponds to the rows at the spatial locations of the support region C, and A_P corresponds to the rows of the block to be predicted.
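Under the same assumptions as the previous sketch, the stacked vector and the dictionary split could be assembled like this (my own construction, not code from the paper; `split_patch` is a hypothetical helper):

```python
import numpy as np

def build_system(img, y, x, bs=4, tw=2, search=16):
    """Return b_c (known template pixels) and the patch dictionary,
    split into A_c (template rows) and A_p (block rows); each causal
    patch in the search window contributes one column."""
    def split_patch(py, px):
        top   = img[py - tw:py, px - tw:px + bs].astype(float).ravel()
        left  = img[py:py + bs, px - tw:px].astype(float).ravel()
        block = img[py:py + bs, px:px + bs].astype(float).ravel()
        return np.concatenate([top, left]), block

    b_c, _ = split_patch(y, x)                 # the block part is what we predict
    cols_c, cols_p = [], []
    for cy in range(max(tw, y - search), y + 1):
        for cx in range(max(tw, x - search), x + 1):
            if cy + bs > y and cx + bs > x:    # keep only decoded patches
                continue
            c, p = split_patch(cy, cx)
            cols_c.append(c)
            cols_p.append(p)
    return b_c, np.stack(cols_c, axis=1), np.stack(cols_p, axis=1)
```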
0:05:51 Now, how do we obtain the prediction with sparse prediction? We have a constrained approximation of the support region. We need this constraint because we try to approximate the template, and a good approximation of the template does not always lead to a good approximation of the block to be predicted. So what we do is run a greedy sparse representation algorithm, and at each iteration of the algorithm we store the sparse vector obtained at that iteration and check whether the resulting approximation of the unknown block is good or not, optimizing over a limited number of iterations of the sparse approximation. In this case we need to signal the selected sparse vector, the one which gives the optimum reconstruction of the unknown block b_P. The prediction itself is then obtained simply by multiplying the corresponding matrix A_P with the selected optimum sparse vector.
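A matching-pursuit-style sketch of this "approximate the template, reuse the coefficients" loop, under my own simplifications (the talk does not fix the greedy algorithm's details); at the encoder, where the true block is known, the best iteration index is what would be signalled:

```python
import numpy as np

def sparse_predict(b_c, A_c, A_p, b_p=None, max_iter=8):
    """Greedy matching-pursuit prediction: approximate the template b_c,
    store the coefficient vector after every iteration, and apply the
    same coefficients to A_p. If the encoder supplies the true block
    b_p, the best iteration number (to be signalled) is chosen."""
    norms = np.linalg.norm(A_c, axis=0) + 1e-12
    An = A_c / norms                               # unit-norm atoms
    r, x, preds = b_c.astype(float).copy(), np.zeros(A_c.shape[1]), []
    for _ in range(max_iter):
        k = int(np.argmax(np.abs(An.T @ r)))       # most correlated atom
        coef = An[:, k] @ r
        x[k] += coef / norms[k]                    # back to unnormalised scale
        r -= coef * An[:, k]
        preds.append(A_p @ x)
    if b_p is None:                                # decoder side: count is known
        return preds[-1], max_iter
    errs = [np.sum((p - b_p) ** 2) for p in preds]
    best = int(np.argmin(errs))
    return preds[best], best + 1                   # prediction + count to signal
```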
0:07:07 So this was just the prior work; now I would like to speak about non-negative matrix factorization. This algorithm is basically a low-rank factorization of data with the property that the factors are always non-negative, and this is very useful for the physical interpretation of the results of the factorization; researchers are using it in applications such as data mining and noise removal. In other words, suppose that we are given a non-negative data matrix B; we try to find its matrix factors, say A and X. The usual cost function of NMF is the Euclidean distance, with the constraint that the elements of the factor matrices are always non-negative. This is a well-known problem, and it was solved in 2000 by Lee and Seung with multiplicative update iterations: starting with randomized non-negative initializations of A and X, and applying the multiplicative update equations, it is proven that the Euclidean distance is non-increasing at each iteration.
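For reference, the Lee and Seung multiplicative updates the speaker refers to look like this in NumPy (a generic sketch for B ≈ AX, with my own choice of initialization and iteration count, not specific to the prediction problem):

```python
import numpy as np

def nmf(B, rank, n_iter=200, eps=1e-9):
    """Lee & Seung (2000) multiplicative updates for B ~ A @ X with
    non-negative factors; the Euclidean cost ||B - AX||^2 is
    non-increasing under these updates."""
    rng = np.random.default_rng(0)
    A = rng.random((B.shape[0], rank)) + eps       # random non-negative init
    X = rng.random((rank, B.shape[1])) + eps
    for _ in range(n_iter):
        X *= (A.T @ B) / (A.T @ A @ X + eps)
        A *= (B @ X.T) / (A @ X @ X.T + eps)
    return A, X
```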
0:08:40 We can write this cost function of NMF in vector form. Suppose that we have a vector b which needs to be approximated by a matrix A times a vector x. It is easy to derive the multiplicative update equations when we work with this kind of problem as well. The idea here is to fix A: b is the data which needs to be approximated, and A will be fixed here.
0:09:12 Let me just remind you what A was: A contains the texture patches extracted from the causal search window. So the idea is that we try to find a non-negative representation of the support region, and we then approximate the unknown block with the same coefficients. More formally, if we adapt the iterations to our problem, we just use A_C and b_C, which correspond to the template and the dictionary rows for the template. And since we fix the dictionary A_C, we have only one multiplicative update equation, the one for x. The vector x starts from random non-negative initial values and is iterated until a final iteration number, or until a condition specified by us, is satisfied. The predicted values of b_P are then obtained by multiplying A_P, the dictionary rows which correspond to the block to be predicted, with the vector x from the final iteration.
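Putting the pieces together, here is a minimal sketch of the prediction step as just described: the dictionary stays fixed, only x is updated multiplicatively, and the block prediction is A_P x (my own reading of the talk, with a fixed iteration count assumed as the stopping rule):

```python
import numpy as np

def nmf_predict(b_c, A_c, A_p, n_iter=100, eps=1e-9):
    """Fixed-dictionary NMF prediction: iterate only the single update
    x <- x * (A_c^T b_c) / (A_c^T A_c x) from a random non-negative
    start, then predict the block as A_p @ x."""
    rng = np.random.default_rng(0)
    x = rng.random(A_c.shape[1]) + eps
    Atb, AtA = A_c.T @ b_c, A_c.T @ A_c            # precompute the fixed terms
    for _ in range(n_iter):
        x *= Atb / (AtA @ x + eps)
    return A_p @ x
```

Note that the update is well defined here because pixel intensities, and hence b_C and the dictionaries, are non-negative.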
0:10:35 Now let me show some experimental results. These are the prediction results: the rate-distortion performance for Barbara and for Cameraman. We tested our algorithm against prediction based on orthogonal matching pursuit and on template matching averaging, and you can see that, on top, the NMF algorithm is the best in terms of coding efficiency. These are the results for the reconstruction of the first frames that we used, and again you can see the decrease in bitrate and the increase in the PSNR values; it is clearly improved.
0:11:19 Now let us take a look at the prediction when NMF runs without a sparsity constraint. As you can see, the prediction is not very good. This is because we do not have any constraint on the number of patches to be used: in the sparse approximations we fix the value of K, the number of patches, so we have a limited number of atoms used for the prediction, but in NMF we did not have any such constraint. So, starting from this observation, we simply impose a sparsity constraint on NMF. The constraint is just to allow only K non-zero elements in the sparse vector, and we again keep track of these sparse vectors to optimize the prediction, as in the sparse approximation method. If we formulate it, it is very similar to the sparse-approximation-based prediction algorithms, except that we have a non-negativity constraint on the coefficients. And of course we need to signal the selected value of K, in order to optimize over the number of patches. The prediction is then obtained, and signalled, in the same manner as in the sparse prediction method.
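One plausible way to realise the K-sparsity constraint, offered as a guess at an implementation (the talk does not spell out the exact mechanism), is to prune x to its K largest entries and refine those on the restricted support:

```python
import numpy as np

def sparse_nmf_predict(b_c, A_c, A_p, K, n_iter=100, eps=1e-9):
    """Sparsity-constrained NMF prediction: run the multiplicative
    update, keep only the K largest coefficients, then refine them on
    the restricted support; K (and the mode) would be signalled."""
    rng = np.random.default_rng(0)
    x = rng.random(A_c.shape[1]) + eps
    Atb, AtA = A_c.T @ b_c, A_c.T @ A_c
    for _ in range(n_iter):
        x *= Atb / (AtA @ x + eps)
    sup = np.argsort(x)[-K:]                       # K largest coefficients
    As = A_c[:, sup]
    xs, Asb, AsA = x[sup].copy(), As.T @ b_c, As.T @ As
    for _ in range(n_iter):                        # refine on the support only
        xs *= Asb / (AsA @ xs + eps)
    return A_p[:, sup] @ xs
```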
0:12:41 Here, since we reduced the computational load thanks to the sparsity constraint, we decided, instead of using one template, to introduce nine modes and to select the best one. This is in order to be comparable with H.264, because H.264 4x4 intra has nine modes; so we just decided to have nine modes and to compare with H.264 intra prediction. And here, since we select among these modes, we need to signal the chosen one as an integer value to the decoder.
0:13:21 Now I would like to show you a region which is extracted from the Foreman image, predicted at a very low bitrate. You can see the H.264 prediction, the sparse approximation prediction, and the sparse NMF prediction, and you can see the artifacts of the sparse approximation on the edges, while there are no such artifacts on the image predicted with sparse NMF. And here is an image region extracted from Barbara, a texture region, where you can clearly see the improvement in visual quality achieved by this algorithm.
0:14:03 Finally, here are the rate-distortion compression results, comparing H.264, sparse approximation, and NMF on the test images. We let the number of atoms for the sparse approximation, and likewise the number of patches for the NMF algorithm, vary from one to eight, with a 4x4 block size, and the best prediction is selected by a rate-distortion cost function. The top red curve is NMF; the blue curves correspond to the sparse approximations that we discussed first, and the remaining curves correspond to the H.264 prediction modes.
0:15:03 As a conclusion: in this work we introduced a new image prediction method which is based on an adaptation of the non-negative matrix factorization algorithm, and with the sparsity constraint it performs even better. This approach can also be applied to image inpainting, loss concealment, and similar applications. And there is a final remark: this algorithm can be an effective alternative to the other methods I described at the beginning of this presentation. I would like to thank you for your time, and if you have some questions, I would be happy to answer them.
0:15:49 (Session chair) Do we have any questions? If you have some questions, please just step up to the microphone. Yes, go ahead, thank you.
0:16:03 (Audience member) How is the computational cost compared to the other methods?
0:16:10 (Presenter) The computational cost compared to H.264 is higher, because in H.264 the directional interpolation predictors are defined beforehand, and the encoder just uses these few predictors to interpolate the pixel values. But that is also why it does not work well for texture regions and complex structures, so exemplar-based techniques like ours have to bring in somewhat more complex algorithms to overcome this weakness in the image prediction. So yes, in terms of computational complexity it is higher than H.264, but compared to the sparse approximation algorithm it is about the same, I can say.
0:17:07 (Session chair) Any other questions?
0:17:14 (Audience member) My question is about the NMF with the sparsity constraint. Once you add the sparsity constraint, it is very similar to a sparse representation, except that you have the constraint that x is greater than or equal to zero. Why do you think you get better performance if you put in that constraint?
0:17:42 (Presenter) Okay, good question. Actually, in the sparse approximation method, at each iteration you try to approximate the template. At the first iteration you find the highest correlation between the template and the atoms in the dictionary, and then you get a residual; at the second iteration you process the residual image, the residual of the template, and so on. Now, in the spatial domain the template and the unknown block are correlated with each other, but their residuals are not correlated. So, first of all, that is why the weighting coefficients in the sparse approximations might be very good for the template but might contain high frequencies, artifacts, for the block to be predicted. In NMF, instead, we use an additive combination of the patches themselves, rather than correlation coefficients which are obtained in the residual domain. As you can see, we do not use the residual information at all in the NMF algorithm; we just use the patches, which are very close to the template. I hope this is clear.
0:19:09 (Session chair) Okay, thank you very much.