0:00:14 | thank you |

0:00:15 | Good morning. |

0:00:17 | My name is [inaudible], and I would like to welcome you to this session. |

0:00:26 | I will start with a description of the problem that we address in this work. |

0:00:34 | Then I will describe prior image prediction methods, namely template matching based and sparse approximation based prediction methods. |

0:00:45 | Next, I will introduce our new approach, based on non-negative matrix factorization, and a direct extension of it for prediction. |

0:00:56 | I will then show some experimental results, and I will finish my presentation with the conclusions. |

0:01:03 | So in this work we address the problem of closed-loop image prediction. |

0:01:09 | When we talk about closed-loop image prediction, the state of the art is still the H.264 intra prediction modes. |

0:01:17 | This prediction method interpolates pixel values along predefined directions, so it works well in homogeneous regions of an image, or in regions which follow the orientation of the modes, like simple edges. |

0:01:39 | However, the interpolation fails mostly in complex regions and textures. |

0:01:46 | To address this shortcoming, there have been lots of template matching based algorithms, included for instance as an additional mode in H.264, and even sparse approximation based methods, which can be used as a generalization of template matching. |

0:02:05 | After giving this brief information, I would like to remind you of the H.264 intra prediction modes. |

0:02:11 | As you may already know, there are two block sizes in the prediction system, 4x4 and 16x16, each with its own set of prediction modes. |

0:02:23 | The 4x4 size has nine modes, including one DC mode and eight directional modes. |

0:02:28 | The idea is simply to propagate, or interpolate, the pixel values which are already encoded into the block to be predicted, along the direction of the chosen mode. |
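The propagation the speaker describes can be sketched as follows. This is an illustrative Python sketch of two of the nine 4x4 modes (DC and vertical) using the standard H.264 rounding for the DC mean; it is not code from the talk.

```python
import numpy as np

def intra_4x4_dc(above, left):
    """H.264-style 4x4 DC mode: fill the block with the rounded mean
    of the 8 reconstructed neighbour samples above and to the left."""
    dc = (int(np.sum(above)) + int(np.sum(left)) + 4) >> 3
    return np.full((4, 4), dc, dtype=int)

def intra_4x4_vertical(above):
    """H.264-style 4x4 vertical mode: propagate each neighbour sample
    above the block straight down its column."""
    return np.tile(np.asarray(above, dtype=int).reshape(1, 4), (4, 1))
```

Both modes only read already-reconstructed neighbours, which is what makes them usable in a closed prediction loop.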

0:02:44 | Template matching is also a very well known, simple algorithm that people use. |

0:02:51 | The idea is to define a template, a causal neighborhood close to the pixels of the target block. |

0:02:57 | We then look, in a causal search window, for the candidate which minimizes the distance between its template and the target template. |

0:03:11 | The algorithm then simply copies the block attached to the best-matching template into the target block, as the prediction. |
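The template matching procedure just described can be sketched as below. The L-shaped template of width `twidth` and the exhaustive scan over candidates strictly above the target block are illustrative assumptions, not the exact configuration used in the talk.

```python
import numpy as np

def template_match_predict(image, block_top, block_left, bsize=4, twidth=2):
    """Predict a bsize x bsize block by template matching in a causal area."""
    def template(img, top, left):
        above = img[top - twidth:top, left - twidth:left + bsize]   # rows above
        beside = img[top:top + bsize, left - twidth:left]           # columns left
        return np.concatenate([above.ravel(), beside.ravel()])

    target = template(image, block_top, block_left)
    best_ssd, best_pos = np.inf, None
    # Scan candidate positions whose blocks lie strictly above the target.
    for t in range(twidth, block_top - bsize + 1):
        for l in range(twidth, image.shape[1] - bsize + 1):
            ssd = float(np.sum((template(image, t, l) - target) ** 2))
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (t, l)
    t, l = best_pos
    # Copy the block attached to the best-matching template.
    return image[t:t + bsize, l:l + bsize].copy()
```

On a periodic texture the best-matching template sits one period away, so the copied block reproduces the target exactly.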

0:03:19 | As an example, I would like to show you a method in which template matching is added as an additional mode in H.264, resulting in noticeable bitrate savings. |

0:03:33 | The idea is simple: the 4x4 block to be predicted is divided into four sub-blocks, and template matching is applied on these sub-blocks. |

0:03:51 | A further variant of this approach averages multiple candidate predictors, which results in up to fifteen percent bitrate saving in H.264. |

0:04:01 | for |

0:04:04 | This idea was then extended to sparse prediction, a sparse approximation based algorithm. |

0:04:11 | Instead of matching a single template, it tries to combine several candidate patches. |

0:04:18 | We calculate the weighting coefficients by running a sparse approximation on the template; we then keep the selected atoms and reuse the same coefficients to predict the target block. |

0:04:35 | Before going into the details of the formulation, I would like to introduce the notation. |

0:04:40 | Suppose that we have an image; we define a support region C, which is the template, around the unknown block B to be predicted. |

0:04:53 | We stack the pixel values of the support region into a vector b_C, and the unknown pixel values of the block to be predicted into a vector b_B. |

0:05:14 | Putting these two together gives the full patch vector b. |

0:05:20 | We also have a matrix A: all of the image patches in the causal search window are put into the columns of this matrix. |

0:05:33 | We then partition this matrix into A_C and A_B, where A_C corresponds to the rows at the pixel locations of the support region C, and A_B corresponds to the rows at the locations of the block to be predicted. |
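The construction of A, A_C and A_B can be sketched as follows. The square patch layout, with the unknown block as the bottom-right `bsize` x `bsize` corner, and the search-window bounds are illustrative assumptions about details the talk does not specify.

```python
import numpy as np

def build_dictionary(image, block_top, block_left, bsize=4, twidth=2, search=12):
    """Stack the patches of a causal search window into the columns of A,
    then split its rows into A_C (template locations) and A_B (block
    locations)."""
    psize = twidth + bsize
    t0 = max(0, block_top - search)
    l0 = max(0, block_left - search)
    cols = []
    # Candidate patches lying strictly above the target block row (causal).
    for t in range(t0, block_top - psize + 1):
        for l in range(l0, min(image.shape[1] - psize, block_left + search) + 1):
            cols.append(image[t:t + psize, l:l + psize].ravel())
    A = np.array(cols, dtype=float).T            # one vectorised patch per column
    # Which entries of a vectorised patch belong to the unknown block?
    mask = np.zeros((psize, psize), dtype=bool)
    mask[twidth:, twidth:] = True
    block_rows = mask.ravel()
    return A[~block_rows], A[block_rows]          # A_C, A_B
```

Splitting by a boolean row mask keeps the column order of A_C and A_B identical, which is what lets the same coefficient vector be applied to both.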

0:05:51 | Now we have everything needed for sparse prediction. |

0:05:54 | We formulate a constrained sparse approximation of the support region. |

0:05:59 | We need a constraint because we only approximate the template, and a good approximation of the template does not always lead to a good approximation of the block to be predicted. |

0:06:12 | So what we do is run a greedy sparse representation algorithm and, at each iteration of the algorithm, keep the intermediate sparse vectors and check whether the resulting approximation of the unknown block is good or not, within a limited number of iterations of the sparse approximation. |

0:06:43 | In this case we need to signal the index of the selected sparse vector, the one which gives the optimum reconstruction of the unknown block B. |

0:06:54 | The prediction itself is then obtained simply by multiplying the corresponding matrix A_B with the selected optimum sparse vector. |
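The greedy loop just described can be sketched with a plain orthogonal-matching-pursuit iteration; the talk only says "a greedy algorithm", so OMP is an assumption. `A_C` holds the template rows of the patch dictionary, `A_B` the rows co-located with the unknown block, and `b_C` is the template.

```python
import numpy as np

def sparse_predict(A_C, A_B, b_C, n_iter=8):
    """Return one block prediction per greedy iteration; the encoder
    would keep the iteration whose prediction is best and signal its
    index to the decoder."""
    residual = b_C.astype(float).copy()
    support, predictions = [], []
    for _ in range(n_iter):
        # Greedy atom selection: largest correlation with the residual.
        k = int(np.argmax(np.abs(A_C.T @ residual)))
        if k not in support:
            support.append(k)
        # Least-squares coefficients on the current support ...
        x_s, *_ = np.linalg.lstsq(A_C[:, support], b_C, rcond=None)
        residual = b_C - A_C[:, support] @ x_s
        # ... applied unchanged to the co-located block rows.
        predictions.append(A_B[:, support] @ x_s)
    return predictions
```

Note that the coefficients are fitted only on the template rows and then reused on `A_B`, exactly the extrapolation step the speaker describes.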

0:07:07 | So far this was the state of the art; now I would like to speak about non-negative matrix factorization. |

0:07:11 | NMF is a low-rank factorization of non-negative data, with the property that the factors are always non-negative. |

0:07:28 | It is very useful for a physical, parts-based interpretation of the data, and researchers use it in applications such as data mining and noise removal. |

0:07:45 | In other words, suppose that we are given a non-negative matrix V; we try to find its non-negative matrix factors W and H. |

0:07:57 | The usual cost function of NMF is the Euclidean distance between V and WH, with the constraint that the elements of the factor matrices are always non-negative. |

0:08:10 | This is a well known problem, and it was solved in 2000 by Lee and Seung with multiplicative update iterations. |

0:08:20 | Starting with randomly initialized non-negative matrices W and H, and applying the alternating multiplicative update equations, it can be shown that the Euclidean distance is non-increasing at each iteration. |
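A minimal sketch of the Lee-Seung multiplicative updates for the Euclidean cost ||V - WH||^2 is given below; the random initialization and iteration count are arbitrary choices for illustration.

```python
import numpy as np

def nmf(V, rank, n_iter=500, eps=1e-9):
    """Lee-Seung multiplicative updates: starting from random
    non-negative factors, each elementwise update keeps W and H
    non-negative and does not increase ||V - W H||_F^2."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Update H, then W; eps guards against division by zero.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because the updates are purely multiplicative, an entry that starts positive stays non-negative for all iterations, which is what enforces the constraint without any projection step.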

0:08:41 | We can write this cost function of NMF in vector form: suppose that we have a vector b which needs to be approximated as the product of a matrix A and a vector x. |

0:08:56 | Such non-negative least-squares problems arise in many applications. |

0:09:03 | The idea here is to fix A, because b is the data which needs to be approximated, so A will stay fixed here and only x is updated. |

0:09:12 | Let me just remind you what A was: A contains the texture patches extracted from the causal search window. |

0:09:31 | and then the be approximate the unknown block with the same power right |

0:09:36 | or |

0:09:36 | but a more of for a for like this uh |

0:09:40 | this iteration into a a a a a a a and so we just use a sub C and a |

0:09:44 | subset which corresponds to the template and the dictionary for the template |

0:09:48 | and since we we fixed T |

0:09:51 | dictionary a C |

0:09:53 | we have only one |

0:09:54 | it to a a |

0:09:55 | a i if shown that for i |

0:09:59 | so this uh X it's start the on the initial right |

0:10:02 | uh a non negative |

0:10:03 | values we and it's it's rate until uh a it to the final iteration number or or or or or |

0:10:08 | or a a condition which is that's fight by |

0:10:12 | by i |

0:10:14 | and uh did did the predict the values of B are used they get the it is a using D |

0:10:21 | the vector |

0:10:22 | vector X which is the |

0:10:24 | the final iteration of four |

0:10:27 | this out we didn't band the the use the |

0:10:30 | the dictionary which is which corresponds to |

0:10:32 | but look to be pretty |

0:10:35 | Let me show some experimental results; these are the rate-distortion results for the prediction. |

0:10:40 | We evaluated the performance on Barbara and Cameraman, and we tested our algorithm against orthogonal matching pursuit and template matching. |

0:10:50 | You can see at the top that the NMF algorithm outperforms the others in terms of coding efficiency. |

0:11:02 | These are the results for the reconstruction of the first frames that we used; again, you can see that the decrease in bitrate and the increase in PSNR values greatly improve over the reference methods. |

0:11:19 | Now let us take a look at the prediction obtained with the NMF approach. |

0:11:23 | As you can see, the prediction is somewhat over-smoothed. |

0:11:27 | This is because we do not have any constraint on the number of patches to be used: in the sparse approximations we fix the value of K, the number of atoms used for prediction, but in NMF we did not have any such constraint. |

0:11:47 | Starting from this observation, we simply impose a sparsity constraint on the NMF formulation. |

0:11:54 | The constraint is just to allow at most K non-zero elements in the sparse vector. |

0:12:00 | We again keep track of these sparse vectors to find the optimal prediction, as in the sparse approximation method. |

0:12:07 | If we formulate it like this, it is similar to sparse approximation based prediction algorithms, except that we have a non-negativity constraint on the coefficients. |

0:12:20 | And of course, at the encoder we need to signal the value of K selected as the optimal number of patches. |

0:12:30 | The prediction is then obtained, and the signaling is done, in the same manner as in the sparse prediction method. |
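One way to sketch the sparsity-constrained variant is multiplicative updates followed by hard-thresholding to the K largest coefficients and a refinement pass. The thresholding strategy is an assumption; the talk only states that at most K non-zero, non-negative coefficients are allowed and that K is signalled to the decoder.

```python
import numpy as np

def sparse_nmf_predict(A_C, A_B, b_C, K, n_iter=500, eps=1e-9):
    """Sparsity-constrained NMF prediction (illustrative sketch)."""
    rng = np.random.default_rng(0)
    x = rng.random(A_C.shape[1]) + eps
    for _ in range(n_iter):
        x *= (A_C.T @ b_C) / (A_C.T @ (A_C @ x) + eps)
    # Keep only the K largest coefficients non-zero ...
    if K < x.size:
        x[np.argsort(x)[:-K]] = 0.0
    # ... and refine the survivors (zeros stay zero under the update).
    for _ in range(n_iter):
        x *= (A_C.T @ b_C) / (A_C.T @ (A_C @ x) + eps)
    return A_B @ x
```

A convenient property of the multiplicative update is that a coefficient set to zero remains zero, so the refinement pass automatically respects the chosen support.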

0:12:41 | Here, since the sparsity constraint reduces the computational load, instead of using one template we introduce nine modes, that is, nine template configurations, and select the best one. |

0:12:58 | This is done to compare with H.264, because H.264 4x4 intra prediction has nine modes; so we decided to have nine modes as well and to compare with the H.264 intra prediction. |

0:13:11 | And since we select the value of K adaptively, we need to signal it as an integer value to the decoder. |

0:13:21 | Now I would like to show you a region which is extracted from one of the test images, predicted at very low bitrate. |

0:13:28 | You can see the H.264 prediction, and then the sparse approximation and the sparse NMF prediction methods. |

0:13:37 | You can see the artifacts of the sparse approximation on the edges, whereas there are almost no such artifacts on the image predicted with sparse NMF. |

0:13:46 | Here is another image region, extracted from Barbara, a textured region, and you can clearly see the improvement in visual quality achieved by this algorithm. |

0:14:03 | Finally, here are the compression results, compared to H.264, to sparse approximation, and to NMF, for Barbara and other real images. |

0:14:19 | For the sparse approximation methods we fixed the number of atoms, and, sorry, for the sparse NMF algorithm the number of patches K varies from one to eight. |

0:14:38 | The block size is 4x4, and the best prediction mode is selected by a rate-distortion criterion. |

0:14:45 | The topmost red curve is sparse NMF, the blue curves correspond to the sparse approximations that we discussed first, and the remaining curves correspond to the H.264 prediction modes. |

0:15:03 | So, to conclude: in this work we introduced a new image prediction method which is based on the non-negative matrix factorization algorithm. |

0:15:13 | With the sparsity constraint, it works even better. |

0:15:17 | This method can also be applied to image inpainting and loss concealment applications. |

0:15:24 | As a final remark, this algorithm can be an effective alternative, but it still needs to be compared to methods other than the ones considered in this presentation. |

0:15:36 | I would like to thank you for your time, and if you have some questions, I would be happy to answer them. |

0:15:49 | Do we have any questions? |

0:15:55 | Any questions from the group? |

0:16:00 | I have some questions, then. Thank you. |

0:16:03 | How is the computational cost compared to the other methods? |

0:16:10 | The computational cost compared to H.264 is higher, because in H.264 the interpolation prediction directions are defined beforehand, and the encoder just uses these few directions to interpolate the pixel values. |

0:16:31 | But that is also why H.264 does not work well for texture regions and complex structures. |

0:16:36 | You need neighborhood-based techniques, that is, somewhat more complex algorithms, to overcome this weakness of the interpolation. |

0:16:50 | So yes, in terms of computational complexity, compared to H.264 it is higher, but compared to the sparse approximation algorithms it is about the same, I would say. |

0:17:07 | Any other questions? |

0:17:14 | My question is about the NMF method and the sparse prediction. If you look at your formulation, it is very similar to a sparse representation, except that you have the constraint that x is greater than or equal to zero. Why do you think that your method works better when you put in this constraint? |

0:17:42 | Okay, good question. |

0:17:45 | Actually, in a sparse approximation method, at each iteration you try to approximate the template. |

0:17:52 | At the first iteration you find the highest correlation between the template and the atoms in the dictionary, and then you get a residual. |

0:18:02 | At the second iteration you operate on the residual image, the residual of the template. |

0:18:10 | Now, in the spatial domain the template and the unknown block are correlated with each other, but in the residual domain they are not correlated. |

0:18:20 | That is why, for example, the weighting coefficients in the sparse approximations might be very good for the template, but they might contain high frequencies for the block to be predicted. |

0:18:37 | In NMF, we additively combine the patches themselves, instead of using correlation coefficients which are calculated in the residual domain. |

0:18:54 | So, you see, we do not use the residual information at all in the NMF algorithm; we just use the patches, which are very close to the template. |

0:19:05 | I hope this is clear. |

0:19:09 | Okay, thank you. |