| 0:00:16 | Okay. | 
|---|
| 0:00:16 | So, I'm here today to present our paper, titled | 
|---|
| 0:00:19 | "Image Compression Using the Iteration-Tuned and Aligned Dictionary". | 
|---|
| 0:00:22 | This is joint work with my co-authors. | 
|---|
| 0:00:28 | So I've arranged my talk into three parts. In the first part | 
|---|
| 0:00:31 | I'll review sparse representations | 
|---|
| 0:00:33 | and motivate how they can be used for image compression, | 
|---|
| 0:00:37 | and cover some of the issues that come up in this scenario. | 
|---|
| 0:00:39 | I'll address those in the second part, | 
|---|
| 0:00:41 | where we present our approach | 
|---|
| 0:00:45 | and our contributions, | 
|---|
| 0:00:46 | and then present results | 
|---|
| 0:00:47 | in the third part. | 
|---|
| 0:00:49 | First, sparse representations. | 
|---|
| 0:00:52 | We're given a signal vector y, | 
|---|
| 0:00:54 | and we're also given a dictionary matrix D, | 
|---|
| 0:00:56 | which is overcomplete, meaning that it has more columns than it has | 
|---|
| 0:01:00 | rows, that is, more columns than our signal dimension. | 
|---|
| 0:01:04 | So, in the equation shown here: | 
|---|
| 0:01:10 | this is the signal vector y, | 
|---|
| 0:01:12 | this is the dictionary matrix D, and this vector x | 
|---|
| 0:01:15 | is the sparse representation. | 
|---|
| 0:01:16 | What it does is | 
|---|
| 0:01:18 | select a few columns of the dictionary matrix D | 
|---|
| 0:01:21 | and weight them | 
|---|
| 0:01:23 | to construct an approximation of the signal vector y, as the summation shown here. | 
|---|
| 0:01:29 | So the aim is to use as few columns as possible of this dictionary matrix | 
|---|
| 0:01:33 | and nonetheless obtain a good approximation of y. | 
|---|
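For reference, the approximation just described can be written compactly as follows (standard notation, not copied from the slides):

```latex
y \;\approx\; D x \;=\; \sum_{k=1}^{L} x_{i_k}\, d_{i_k},
\qquad \|x\|_0 = L \ll \dim(y),
```

where the d_{i_k} are the L selected columns (atoms) of D and the x_{i_k} are their weights.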
| 0:01:35 | Now, how can one construct this vector x? | 
|---|
| 0:01:38 | There are quite a few ways; the one we use in our work is | 
|---|
| 0:01:41 | the matching pursuit algorithm, | 
|---|
| 0:01:43 | which works like so. | 
|---|
| 0:01:45 | We initialize the residual to the signal vector y, | 
|---|
| 0:01:47 | and then in the first | 
|---|
| 0:01:50 | step of the iteration | 
|---|
| 0:01:51 | we choose | 
|---|
| 0:01:52 | a column, | 
|---|
| 0:01:54 | an atom, from the dictionary: the one that's most correlated to our residual vector. | 
|---|
| 0:01:56 | Then | 
|---|
| 0:01:57 | we set its coefficient | 
|---|
| 0:01:59 | to the projection of the residual onto | 
|---|
| 0:02:01 | that column, and then we check the stopping condition: if we have enough atoms we output x; otherwise | 
|---|
| 0:02:05 | we remove the contribution of the new atom | 
|---|
| 0:02:07 | to obtain a new residual, | 
|---|
| 0:02:09 | and we iterate back | 
|---|
| 0:02:10 | to choose another atom and another coefficient. | 
|---|
| 0:02:11 | So this is the matching pursuit algorithm we use. | 
|---|
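A minimal sketch of the matching pursuit loop just described, assuming unit-norm dictionary columns and a fixed number of atoms L; the function and variable names are mine, not from the talk:

```python
import numpy as np

def matching_pursuit(y, D, L):
    """Greedy matching pursuit: approximate y with L atoms (columns) of D.

    Assumes the columns of D are unit-norm.
    """
    residual = y.astype(float)                  # initialize residual to the signal
    x = np.zeros(D.shape[1])                    # sparse coefficient vector
    for _ in range(L):
        correlations = D.T @ residual           # correlation of each atom with residual
        k = int(np.argmax(np.abs(correlations)))  # most correlated atom
        x[k] += correlations[k]                 # coefficient = projection onto that atom
        residual = residual - correlations[k] * D[:, k]  # remove the atom's contribution
    return x, residual

# Usage: y_hat = D @ x approximates y with at most L selected atoms.
```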
| 0:02:14 | And | 
|---|
| 0:02:16 | then, once we have the vector x, how do we use it | 
|---|
| 0:02:18 | in image compression? There are | 
|---|
| 0:02:21 | various ways in which this is done | 
|---|
| 0:02:22 | in the literature; | 
|---|
| 0:02:23 | this is the way we do it, | 
|---|
| 0:02:25 | which is the more standard one: we just take | 
|---|
| 0:02:29 | the image, split it into blocks, and let each | 
|---|
| 0:02:31 | block | 
|---|
| 0:02:32 | be the signal vector y, | 
|---|
| 0:02:34 | and then compute its sparse approximation; so this sparse vector x is a | 
|---|
| 0:02:38 | compact | 
|---|
| 0:02:39 | representation of the signal vector y. | 
|---|
| 0:02:41 | This is the approach we use. | 
|---|
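A sketch of the block-based setup just described; the block size and the helper name are assumptions of mine:

```python
import numpy as np

def image_to_blocks(image, b=8):
    """Split a grayscale image into non-overlapping b-by-b blocks,
    each flattened into a signal vector y of dimension b*b."""
    h, w = image.shape
    blocks = []
    for r in range(0, h - h % b, b):
        for c in range(0, w - w % b, b):
            blocks.append(image[r:r + b, c:c + b].reshape(-1))
    return np.stack(blocks)  # one row per block signal vector y

# Each row can then be approximated, e.g. with the matching_pursuit sketch above.
```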
| 0:02:42 | And there are design issues that come up here. | 
|---|
| 0:02:44 | The first one is | 
|---|
| 0:02:47 | which dictionary D we use. | 
|---|
| 0:02:51 | Our solution here is to use a new structured dictionary, | 
|---|
| 0:02:55 | the iteration-tuned and aligned dictionary. | 
|---|
| 0:02:58 | So that's the first design issue. The second issue is | 
|---|
| 0:03:01 | how we choose the sparsity of the blocks over the whole image, that is, how we choose how many atoms | 
|---|
| 0:03:05 | we use to represent each one of these blocks. | 
|---|
| 0:03:08 | For that, we're going to propose a new approach | 
|---|
| 0:03:11 | that uses a rate-distortion criterion | 
|---|
| 0:03:13 | to distribute atoms across the image. | 
|---|
| 0:03:15 | And the third issue is: | 
|---|
| 0:03:17 | once we have the sparse vector x | 
|---|
| 0:03:18 | for each block, | 
|---|
| 0:03:20 | how do we construct a bitstream from that? | 
|---|
| 0:03:23 | There we're just going to use a standard approach: quantization of the | 
|---|
| 0:03:26 | coefficients, a Huffman encoding of those, | 
|---|
| 0:03:28 | and a fixed-length code | 
|---|
| 0:03:29 | for the atom indices. | 
|---|
| 0:03:31 | So the next | 
|---|
| 0:03:32 | part of my presentation | 
|---|
| 0:03:34 | is going to address these two design issues: | 
|---|
| 0:03:37 | the dictionary choice | 
|---|
| 0:03:38 | and the atom distribution. | 
|---|
| 0:03:40 | Let's begin with the dictionary choice. | 
|---|
| 0:03:42 | So, just to motivate | 
|---|
| 0:03:44 | the dictionary structure that we propose, | 
|---|
| 0:03:46 | I've drawn here | 
|---|
| 0:03:48 | the sparse approximation equation, | 
|---|
| 0:03:50 | and this is the dictionary D, which is a | 
|---|
| 0:03:53 | fat matrix: it has more columns than our signal dimension. | 
|---|
| 0:03:58 | Now, | 
|---|
| 0:03:59 | overcompleteness is interesting because that's what benefits the sparsity of the vector x, | 
|---|
| 0:04:03 | and that's what we want: a sparse | 
|---|
| 0:04:05 | vector x. | 
|---|
| 0:04:06 | But then, | 
|---|
| 0:04:08 | the more overcomplete D is, | 
|---|
| 0:04:19 | the more atoms it has, | 
|---|
| 0:04:22 | and the more computationally expensive it is to find the best | 
|---|
| 0:04:26 | few atoms to represent | 
|---|
| 0:04:27 | the signal vector y well. | 
|---|
| 0:04:29 | That's one issue. | 
|---|
| 0:04:30 | The second issue here, | 
|---|
| 0:04:32 | related to that point, is that | 
|---|
| 0:04:33 | the more atoms we have, | 
|---|
| 0:04:35 | the more expensive it is in terms of coding rate | 
|---|
| 0:04:39 | to | 
|---|
| 0:04:40 | transmit | 
|---|
| 0:04:42 | the identifiers of the atoms used; that's another issue. | 
|---|
| 0:04:45 | So overcompleteness | 
|---|
| 0:04:46 | benefits the sparsity, but it | 
|---|
| 0:04:48 | also increases the complexity of the coding system | 
|---|
| 0:04:51 | and the coding rate. | 
|---|
| 0:04:53 | So what we're going to do is we're going to structure | 
|---|
| 0:04:55 | the dictionary matrix D, meaning that we're going to constrain the way in which groups of atoms can be | 
|---|
| 0:05:00 | selected. | 
|---|
| 0:05:02 | So this is the motivation for | 
|---|
| 0:05:05 | the iteration-tuned | 
|---|
| 0:05:06 | and aligned dictionary: these constraints | 
|---|
| 0:05:08 | are going to allow us to enjoy | 
|---|
| 0:05:10 | the overcompleteness, and the sparsity benefits it | 
|---|
| 0:05:13 | produces, | 
|---|
| 0:05:15 | while keeping under control the | 
|---|
| 0:05:17 | penalty in terms of | 
|---|
| 0:05:19 | complexity | 
|---|
| 0:05:20 | and coding rate. | 
|---|
| 0:05:22 | So, first: | 
|---|
| 0:05:26 | what does "iteration-tuned" mean here? | 
|---|
| 0:05:30 | To illustrate that, I've just drawn | 
|---|
| 0:05:32 | the matching pursuit | 
|---|
| 0:05:35 | block diagram from two slides back. | 
|---|
| 0:05:37 | Here is the dictionary matrix D, | 
|---|
| 0:05:40 | which is constant | 
|---|
| 0:05:41 | over all the iterations | 
|---|
| 0:05:43 | in the standard case. | 
|---|
| 0:05:44 | Now in our case, the iteration-tuned case, | 
|---|
| 0:05:46 | what we do is we make this matrix D a function of the iteration, | 
|---|
| 0:05:50 | like so. | 
|---|
| 0:05:53 | That's why we call it iteration-tuned: | 
|---|
| 0:05:54 | because it changes with the iteration. | 
|---|
| 0:05:57 | Both | 
|---|
| 0:05:58 | the standard D and each per-iteration dictionary | 
|---|
| 0:05:59 | here have the same number of atoms, N. | 
|---|
| 0:06:03 | So then, | 
|---|
| 0:06:05 | overall, | 
|---|
| 0:06:05 | the iteration-tuned | 
|---|
| 0:06:07 | scheme | 
|---|
| 0:06:08 | is | 
|---|
| 0:06:09 | more overcomplete, right, because we have a lot more atoms | 
|---|
| 0:06:12 | to choose from. | 
|---|
| 0:06:14 | But at the same time | 
|---|
| 0:06:15 | the complexity | 
|---|
| 0:06:17 | incurred | 
|---|
| 0:06:17 | in selecting an atom | 
|---|
| 0:06:18 | in this block | 
|---|
| 0:06:20 | is the same, because we have N columns | 
|---|
| 0:06:23 | in whichever dictionary we use back here. | 
|---|
| 0:06:26 | So we have no problem there compared to standard matching pursuit. | 
|---|
| 0:06:29 | And also in terms of coding rate, we use | 
|---|
| 0:06:31 | a fixed-length | 
|---|
| 0:06:32 | code to encode | 
|---|
| 0:06:34 | the index, so the coding rate is just going to be log2 of N. | 
|---|
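A sketch of the iteration-tuned selection loop just described, where a different dictionary is used at each iteration; the list-of-dictionaries interface and the names are my assumptions:

```python
import numpy as np

def iteration_tuned_pursuit(y, layers):
    """Matching pursuit where a different dictionary is used at each iteration.

    `layers` is a list [D_1, D_2, ..., D_L]; each D_i has unit-norm columns
    and the same number N of atoms, so the per-iteration search cost and the
    log2(N)-bit index cost match those of standard matching pursuit.
    """
    residual = y.astype(float)
    indices, coeffs = [], []
    for D_i in layers:
        c = D_i.T @ residual                 # correlate atoms of this layer
        k = int(np.argmax(np.abs(c)))        # best atom in layer i
        indices.append(k)                    # costs ceil(log2(N)) bits with a fixed-length code
        coeffs.append(c[k])
        residual = residual - c[k] * D_i[:, k]  # update residual for the next layer
    return indices, coeffs, residual
```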
| 0:06:37 | So this is the structuring approach: | 
|---|
| 0:06:39 | it allows us to enjoy overcompleteness | 
|---|
| 0:06:41 | while keeping control of | 
|---|
| 0:06:42 | complexity and coding rate. | 
|---|
| 0:06:48 | I've just | 
|---|
| 0:06:48 | drawn here | 
|---|
| 0:06:52 | the matrices D_i in a layered structure, so this is the iteration-tuned structure, | 
|---|
| 0:06:57 | where layer i is the matrix D_i. | 
|---|
| 0:07:01 | And how do we train this structure? | 
|---|
| 0:07:03 | The training scheme is very simple: we use a top-down approach. | 
|---|
| 0:07:06 | So we assume we have a large set of | 
|---|
| 0:07:08 | training vectors y, and we use all these training vectors to train | 
|---|
| 0:07:12 | the first layer, | 
|---|
| 0:07:13 | D_1. | 
|---|
| 0:07:14 | And then, once we've trained D_1, we fix it and we compute | 
|---|
| 0:07:17 | the residuals at the output of the first layer; so we now have the residuals of the training set, | 
|---|
| 0:07:20 | and those are used to train the second layer, | 
|---|
| 0:07:22 | and so on down to the last layer. | 
|---|
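A sketch of this top-down training loop; `learn_dictionary` is a placeholder I'm introducing for whatever per-layer learning rule is actually used (the talk does not specify it at this point):

```python
import numpy as np

def train_layers(Y, n_layers, n_atoms, learn_dictionary):
    """Top-down training: layer i is learned on the residuals left by layer i-1.

    Y: (dim, n_samples) matrix of training vectors y, one per column.
    learn_dictionary: callable returning a (dim, n_atoms) dictionary with
    unit-norm columns; it stands in for the actual learning rule.
    """
    residuals = Y.astype(float)
    layers = []
    for _ in range(n_layers):
        D_i = learn_dictionary(residuals, n_atoms)   # train this layer on current residuals
        layers.append(D_i)
        c = D_i.T @ residuals                        # correlations, one column per sample
        k = np.argmax(np.abs(c), axis=0)             # best atom per training vector
        coeffs = c[k, np.arange(residuals.shape[1])]
        residuals = residuals - D_i[:, k] * coeffs   # residuals passed to the next layer
    return layers
```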
| 0:07:27 | So now | 
|---|
| 0:07:30 | let's take layer i | 
|---|
| 0:07:32 | of the iteration-tuned structure from the last slide. | 
|---|
| 0:07:34 | So that's this here: | 
|---|
| 0:07:36 | the input residual comes in and the output residual goes out. | 
|---|
| 0:07:40 | Now I'm going to explore geometrically what happens when we | 
|---|
| 0:07:43 | use one of the two atoms of this layer. | 
|---|
| 0:07:45 | So here, | 
|---|
| 0:07:46 | the input residuals of this layer live in the space drawn out here, | 
|---|
| 0:07:50 | and then this subspace, drawn in green here, | 
|---|
| 0:07:54 | is the output residual subspace | 
|---|
| 0:07:56 | of the green atom. | 
|---|
| 0:07:57 | So as you can see, | 
|---|
| 0:07:59 | there is a reduction of dimensionality | 
|---|
| 0:08:01 | between the input residual space | 
|---|
| 0:08:03 | and the output residuals: | 
|---|
| 0:08:06 | the output residual subspace | 
|---|
| 0:08:08 | has one less dimension than the input space. | 
|---|
| 0:08:10 | And the same holds for the other atom here, the red one: | 
|---|
| 0:08:13 | it reduces dimensionality by one | 
|---|
| 0:08:15 | for its | 
|---|
| 0:08:16 | residuals. | 
|---|
| 0:08:17 | The problem is that | 
|---|
| 0:08:20 | the union of these two | 
|---|
| 0:08:22 | residual subspaces | 
|---|
| 0:08:24 | nonetheless spans the entire | 
|---|
| 0:08:26 | original signal space. | 
|---|
| 0:08:29 | This means that the | 
|---|
| 0:08:32 | atoms of the next layer | 
|---|
| 0:08:33 | are going to have to address the entire signal space. | 
|---|
| 0:08:37 | So this is what motivates | 
|---|
| 0:08:38 | the alignment operation we propose, which works like so. | 
|---|
| 0:08:42 | So now each | 
|---|
| 0:08:45 | atom | 
|---|
| 0:08:46 | has an alignment | 
|---|
| 0:08:47 | matrix, | 
|---|
| 0:08:48 | and this matrix | 
|---|
| 0:08:49 | takes, | 
|---|
| 0:08:51 | for example, the green atom | 
|---|
| 0:08:52 | and aligns it | 
|---|
| 0:08:53 | with the vertical axis | 
|---|
| 0:08:55 | in this three-dimensional example, | 
|---|
| 0:08:57 | and it takes | 
|---|
| 0:08:58 | the | 
|---|
| 0:08:58 | residual | 
|---|
| 0:08:59 | subspace of this atom | 
|---|
| 0:09:01 | and aligns it | 
|---|
| 0:09:01 | with | 
|---|
| 0:09:02 | the horizontal plane. | 
|---|
| 0:09:04 | And it does the same thing for the red atom and its residual subspace: | 
|---|
| 0:09:07 | the red atom is again aligned | 
|---|
| 0:09:09 | with the vertical axis, so that the two | 
|---|
| 0:09:11 | residual subspaces coincide | 
|---|
| 0:09:13 | and they end up right on the | 
|---|
| 0:09:14 | horizontal plane, | 
|---|
| 0:09:15 | meaning that the output residual space, | 
|---|
| 0:09:17 | thanks to these | 
|---|
| 0:09:18 | rotations, now | 
|---|
| 0:09:21 | actually enjoys the reduced dimensionality. | 
|---|
| 0:09:25 | Now, we have freedom in choosing | 
|---|
| 0:09:28 | these | 
|---|
| 0:09:28 | rotation matrices: any rotation that aligns the atom | 
|---|
| 0:09:30 | with the vertical axis and | 
|---|
| 0:09:33 | its residual subspace with | 
|---|
| 0:09:34 | the horizontal plane would do. So we further choose | 
|---|
| 0:09:36 | the rotation | 
|---|
| 0:09:37 | matrices, or alignment matrices, | 
|---|
| 0:09:39 | so that they also force | 
|---|
| 0:09:42 | the residual subspaces | 
|---|
| 0:09:44 | to | 
|---|
| 0:09:45 | have their | 
|---|
| 0:09:46 | principal components | 
|---|
| 0:09:48 | aligned, | 
|---|
| 0:09:50 | in this fashion, so that | 
|---|
| 0:09:51 | the first principal component | 
|---|
| 0:09:53 | of the red | 
|---|
| 0:09:54 | subspace is going to fall along this axis, | 
|---|
| 0:09:56 | and likewise the first principal component | 
|---|
| 0:09:58 | of the green subspace is going to fall | 
|---|
| 0:10:00 | along this | 
|---|
| 0:10:01 | axis, | 
|---|
| 0:10:01 | and so on for the | 
|---|
| 0:10:03 | other components. | 
|---|
| 0:10:06 | So now I'm just going to redraw | 
|---|
| 0:10:09 | our previous scheme with this | 
|---|
| 0:10:12 | alignment modification, | 
|---|
| 0:10:13 | and that's what I have here. | 
|---|
| 0:10:15 | So this is my iteration-tuned and aligned dictionary, | 
|---|
| 0:10:18 | and as you can see, now I have an | 
|---|
| 0:10:20 | alignment matrix per atom. | 
|---|
| 0:10:22 | And because of | 
|---|
| 0:10:24 | the alignment operation, | 
|---|
| 0:10:28 | the atoms | 
|---|
| 0:10:29 | of the dictionary that follows, this | 
|---|
| 0:10:31 | matrix here, | 
|---|
| 0:10:32 | exist in a space of reduced dimensionality, | 
|---|
| 0:10:35 | following the geometry I just illustrated. | 
|---|
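One way to build such an alignment matrix from the residuals assigned to an atom, sketched under the assumption that those residuals span the orthogonal complement of the atom; this illustrates the idea and is not necessarily the exact construction used in the paper:

```python
import numpy as np

def alignment_matrix(d, residuals):
    """One possible alignment (rotation, up to a reflection) for atom d.

    Maps d to the first canonical axis and the principal directions of the
    residuals assigned to d onto the remaining axes, in order of variance.
    Assumes d is unit-norm and that the residuals span the orthogonal
    complement of d (they are orthogonal to d, since pursuit removed the
    d-component).
    """
    n = d.shape[0]
    U, _, _ = np.linalg.svd(residuals, full_matrices=True)  # principal directions
    U_perp = U - np.outer(d, d @ U)          # remove any component along d
    basis = [d]
    for u in U_perp.T:                       # Gram-Schmidt, keeping variance order
        u = u - sum((u @ b) * b for b in basis)
        norm = np.linalg.norm(u)
        if norm > 1e-10:
            basis.append(u / norm)
        if len(basis) == n:
            break
    B = np.stack(basis, axis=1)              # columns: d, then principal directions
    return B.T                               # R @ d == e_1; residual PCs go to e_2, e_3, ...
```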
| 0:10:40 | So this is our | 
|---|
| 0:10:41 | solution to the first design issue, which was which dictionary to choose. | 
|---|
| 0:10:45 | This is a nice dictionary to use because it enjoys overcompleteness | 
|---|
| 0:10:48 | while keeping the complexity and the coding rate under control. | 
|---|
| 0:10:54 | Now, the second design issue | 
|---|
| 0:10:56 | was the distribution of atoms | 
|---|
| 0:10:58 | over the image. | 
|---|
| 0:10:59 | Here we also have a | 
|---|
| 0:11:02 | contribution in this paper, | 
|---|
| 0:11:04 | as opposed to the standard approach used to | 
|---|
| 0:11:06 | specify the number of atoms; the number of atoms here is L. | 
|---|
| 0:11:12 | This is the sparse approximation | 
|---|
| 0:11:14 | of the input signal vector y, | 
|---|
| 0:11:16 | and the standard approach is to apply an | 
|---|
| 0:11:19 | error | 
|---|
| 0:11:20 | threshold | 
|---|
| 0:11:21 | to this approximation: we choose | 
|---|
| 0:11:22 | the smallest number of atoms that satisfies some maximum error. | 
|---|
| 0:11:26 | That's the standard approach. | 
|---|
| 0:11:28 | Here, the problem is that we have | 
|---|
| 0:11:30 | B blocks | 
|---|
| 0:11:31 | in the image, | 
|---|
| 0:11:32 | and we want to choose the sparsity L_b of | 
|---|
| 0:11:34 | each one of these blocks y_b. | 
|---|
| 0:11:36 | So we write | 
|---|
| 0:11:39 | a global optimal | 
|---|
| 0:11:41 | sparsity allocation | 
|---|
| 0:11:42 | problem, like so: | 
|---|
| 0:11:43 | we want to choose the sparsities of all the blocks | 
|---|
| 0:11:46 | so that the cumulative | 
|---|
| 0:11:49 | block representation distortion | 
|---|
| 0:11:51 | is minimized, subject to a constraint | 
|---|
| 0:11:53 | on the cumulative atom budget over all the blocks. | 
|---|
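In symbols, the allocation problem just stated can be written as follows (the notation is mine):

```latex
\min_{L_1,\dots,L_B} \; \sum_{b=1}^{B} D_b(L_b)
\quad \text{subject to} \quad \sum_{b=1}^{B} L_b \le L_{\text{total}},
```

where D_b(L_b) is the distortion of block b when represented with L_b atoms and L_total is the atom budget for the image.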
| 0:11:56 | This is not very tractable to solve exactly, | 
|---|
| 0:11:58 | so we propose | 
|---|
| 0:11:59 | an approximate | 
|---|
| 0:12:01 | scheme, | 
|---|
| 0:12:02 | which works like so. | 
|---|
| 0:12:04 | We first initialize the sparsities of all blocks to zero, | 
|---|
| 0:12:08 | and then we choose the block, | 
|---|
| 0:12:11 | in the second step, that offers | 
|---|
| 0:12:12 | the biggest profit | 
|---|
| 0:12:13 | in terms of per-bit | 
|---|
| 0:12:15 | distortion reduction. | 
|---|
| 0:12:17 | So this is, | 
|---|
| 0:12:18 | this here is the distortion | 
|---|
| 0:12:20 | of the block at its | 
|---|
| 0:12:22 | current sparsity, | 
|---|
| 0:12:23 | and this is the | 
|---|
| 0:12:25 | potential distortion if we add one more atom | 
|---|
| 0:12:28 | to that block's representation. | 
|---|
| 0:12:30 | So this is the gain, the reduction rather, in distortion, | 
|---|
| 0:12:33 | and this is the | 
|---|
| 0:12:34 | coding penalty, | 
|---|
| 0:12:36 | in bits, incurred | 
|---|
| 0:12:37 | by adding this one atom here. | 
|---|
| 0:12:39 | So this whole quantity is the distortion | 
|---|
| 0:12:43 | gain, the distortion reduction, per bit, | 
|---|
| 0:12:45 | and we pick | 
|---|
| 0:12:47 | the block for which this profit is largest. | 
|---|
| 0:12:49 | We add an atom to it; | 
|---|
| 0:12:50 | so we just add one more atom to this chosen block, | 
|---|
| 0:12:54 | increment its sparsity by one, | 
|---|
| 0:12:56 | and go back to the second step: choose a block, | 
|---|
| 0:12:58 | add an atom to the chosen block, and so on, until | 
|---|
| 0:13:03 | the budget for the image is exhausted. | 
|---|
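A sketch of this greedy allocation, assuming we can query each block's distortion as a function of its atom count and the coding cost in bits of one more atom; the function names and the heap-based bookkeeping are mine:

```python
import heapq

def allocate_atoms(block_distortion, bits_per_atom, n_blocks, bit_budget):
    """Greedy rate-distortion allocation of atoms across image blocks.

    block_distortion(b, L): distortion of block b when coded with L atoms.
    bits_per_atom(b, L): bit cost of adding the (L+1)-th atom to block b.
    Returns the per-block sparsities L_b.
    """
    sparsity = [0] * n_blocks
    spent = 0.0
    heap = []  # max-heap on distortion reduction per bit (negated for heapq)
    for b in range(n_blocks):
        gain = block_distortion(b, 0) - block_distortion(b, 1)
        heapq.heappush(heap, (-gain / bits_per_atom(b, 0), b))
    while heap:
        _, b = heapq.heappop(heap)
        cost = bits_per_atom(b, sparsity[b])
        if spent + cost > bit_budget:
            break                              # budget exhausted
        sparsity[b] += 1                       # add one atom to the winning block
        spent += cost
        # Re-insert the block with the profit of its next candidate atom.
        gain = block_distortion(b, sparsity[b]) - block_distortion(b, sparsity[b] + 1)
        heapq.heappush(heap, (-gain / bits_per_atom(b, sparsity[b]), b))
    return sparsity
```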
| 0:13:06 | So that's it; | 
|---|
| 0:13:08 | that was the second | 
|---|
| 0:13:09 | design issue, | 
|---|
| 0:13:10 | and now I have some results to present. | 
|---|
| 0:13:12 | The setup that we're using is as follows. | 
|---|
| 0:13:15 | We use a data set of non-homogeneous face images, | 
|---|
| 0:13:19 | so the lighting conditions and the poses are not controlled. | 
|---|
| 0:13:23 | From it we take a training set of | 
|---|
| 0:13:25 | four hundred | 
|---|
| 0:13:26 | images, | 
|---|
| 0:13:27 | and then a test set of one hundred images; some of these are shown here on the right. | 
|---|
| 0:13:32 | So we use this training set to train | 
|---|
| 0:13:34 | the ITAD structure, | 
|---|
| 0:13:35 | the iteration-tuned and aligned dictionary structure, | 
|---|
| 0:13:38 | and then test | 
|---|
| 0:13:40 | using this | 
|---|
| 0:13:41 | test set. | 
|---|
| 0:13:43 | Okay, | 
|---|
| 0:13:44 | so here are some example results. | 
|---|
| 0:13:51 | So here are rate-distortion results. | 
|---|
| 0:13:54 | First of all, these curves | 
|---|
| 0:13:57 | are averages over the one hundred test images. | 
|---|
| 0:14:00 | So this is JPEG2000, this curve here, | 
|---|
| 0:14:07 | and then I have three | 
|---|
| 0:14:08 | curves for ITAD: | 
|---|
| 0:14:11 | this curve is with the smallest block size, | 
|---|
| 0:14:13 | the green one with blocks of size twelve by twelve, | 
|---|
| 0:14:16 | and this one | 
|---|
| 0:14:16 | for blocks of size sixteen by sixteen. | 
|---|
| 0:14:18 | So as you can see, the gain of ITAD is quite clear; | 
|---|
| 0:14:22 | it holds at all rates, | 
|---|
| 0:14:25 | reaching more than four dB at some rates, | 
|---|
| 0:14:27 | and at the highest rates | 
|---|
| 0:14:29 | it is | 
|---|
| 0:14:29 | still about 0.9 dB. | 
|---|
| 0:14:33 | So, | 
|---|
| 0:14:34 | I just want to point out | 
|---|
| 0:14:35 | that the coding scheme used to encode | 
|---|
| 0:14:37 | the sparse vector x is the one I presented, | 
|---|
| 0:14:41 | so | 
|---|
| 0:14:43 | these rate-distortion results reflect both | 
|---|
| 0:14:45 | the ITAD transform that we use | 
|---|
| 0:14:47 | and the | 
|---|
| 0:14:50 | proposed | 
|---|
| 0:14:51 | atom allocation scheme. | 
|---|
| 0:14:56 | Okay, so now I also have some | 
|---|
| 0:14:58 | visual results. | 
|---|
| 0:15:00 | On this slide I have two images here | 
|---|
| 0:15:02 | that I coded using JPEG2000 | 
|---|
| 0:15:03 | and using ITAD, | 
|---|
| 0:15:04 | and as you can see, | 
|---|
| 0:15:05 | the reconstructions using ITAD are better than those of JPEG2000. | 
|---|
| 0:15:11 | So, | 
|---|
| 0:15:13 | concluding remarks. I started by | 
|---|
| 0:15:14 | summarizing very briefly what sparse representations are and how we can use them | 
|---|
| 0:15:18 | in image compression, | 
|---|
| 0:15:21 | and | 
|---|
| 0:15:22 | in doing so we ran into three design issues. | 
|---|
| 0:15:24 | The first one was what transformation | 
|---|
| 0:15:26 | we apply to the signal, | 
|---|
| 0:15:28 | or to the image blocks rather, | 
|---|
| 0:15:30 | that is, which dictionary to use; | 
|---|
| 0:15:31 | there we proposed using | 
|---|
| 0:15:33 | a new dictionary structure, | 
|---|
| 0:15:34 | the ITAD structure. | 
|---|
| 0:15:36 | And then there was the question of how we | 
|---|
| 0:15:38 | distribute atoms across the image, | 
|---|
| 0:15:41 | and there we proposed a new | 
|---|
| 0:15:43 | greedy rate-distortion approach. | 
|---|
| 0:15:45 | And then, | 
|---|
| 0:15:46 | in terms of | 
|---|
| 0:15:47 | encoding the vector x, we just used very standard approaches, | 
|---|
| 0:15:51 | so there was nothing new there. | 
|---|
| 0:15:52 | But the results | 
|---|
| 0:15:54 | were quite good, | 
|---|
| 0:15:56 | with a clear gain, | 
|---|
| 0:15:58 | though this was only for the class of face pictures. | 
|---|
| 0:16:01 | Thank you very much for your attention. | 
|---|
| 0:16:03 | If you have any questions, I'd be happy to take them. | 
|---|
| 0:16:10 | I have a question. | 
|---|
| 0:16:21 | Could you describe exactly your coding scheme, | 
|---|
| 0:16:24 | how you compute the bit rate for the comparison? | 
|---|
| 0:16:27 | Okay, | 
|---|
| 0:16:28 | yeah. | 
|---|
| 0:16:28 | So, | 
|---|
| 0:16:29 | there are a few things that come into play here. | 
|---|
| 0:16:31 | The vector x | 
|---|
| 0:16:32 | we have to specify in terms of atom indices | 
|---|
| 0:16:35 | and nonzero coefficients. | 
|---|
| 0:16:37 | For the indices we use a fixed-length code, | 
|---|
| 0:16:39 | so the rate is just going to be log2 | 
|---|
| 0:16:41 | of the number of atoms. | 
|---|
| 0:16:43 | And for the coefficients, we quantize them with a custom-designed quantizer, | 
|---|
| 0:16:46 | and then we use a Huffman code for that. | 
|---|
| 0:16:51 | We also take advantage of a special property of the decay of the coefficient magnitudes across the iterations | 
|---|
| 0:17:00 | when designing the quantizer and the code. | 
|---|
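A sketch of the rate bookkeeping described in this answer, with a plain uniform quantizer and an empirical-entropy estimate standing in for the custom quantizer and the Huffman code mentioned; these substitutions are my simplifications:

```python
import math
from collections import Counter

def estimate_rate(all_indices, all_coeffs, n_atoms, step=0.5):
    """Rough bit count for coding the sparse vectors of a whole image.

    all_indices: selected atom indices over all blocks and iterations.
    all_coeffs:  the corresponding coefficients.
    Each index costs ceil(log2(n_atoms)) bits (fixed-length code); the
    coefficients are uniformly quantized and their empirical entropy stands
    in for the Huffman code length (within one bit per symbol).
    """
    index_bits = len(all_indices) * math.ceil(math.log2(n_atoms))
    levels = [round(c / step) for c in all_coeffs]      # uniform scalar quantization
    counts = Counter(levels)
    n = len(levels)
    coeff_bits = sum(-m * math.log2(m / n) for m in counts.values())
    return index_bits + coeff_bits
```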
| 0:17:03 | Thanks. | 
|---|
| 0:17:06 | One more question, right here. | 
|---|
| 0:17:12 | These are recorded sessions, so we need a microphone. | 
|---|
| 0:17:18 | Do you also need to encode the dictionary? | 
|---|
| 0:17:21 | No, we make the assumption that the dictionary is available | 
|---|
| 0:17:24 | at the decoder. | 
|---|
| 0:17:28 | Okay, let's thank the speaker. | 
|---|