0:00:13 Thank you. Um, good afternoon.
0:00:16 One of the problems that we try to solve in image processing is the analysis of data that have been exposed to geometric transformations. For example, we might want to register such data, or classify them in a transformation-invariant way.
0:00:31 One common approach for dealing with such problems is the use of manifold models.
0:00:36 So in this work we have concentrated on pattern transformation manifolds. A pattern transformation manifold is the family of images that are generated by applying a certain set of geometric transformations to a reference pattern.
0:00:50 For example, if we take a pattern p, we denote its transformation manifold by M(p) here. We assume that this is an n-pixel image, so the manifold is also a subset of R^n. In this case, each image on the transformation manifold is a geometrically transformed version of p, and we define each transformation by a parameter vector λ that lives in the parameter space Λ. This Λ determines the type of the geometric transformation; for instance, it could be any combination of 2-D transformations like rotation, translation and scale change, as in the example you can see here.
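As a hedged formalization of the manifold just described (the notation here is my own reconstruction from the talk):

```latex
\mathcal{M}(p) \;=\; \{\, U_\lambda(p) \;:\; \lambda \in \Lambda \,\} \;\subset\; \mathbb{R}^n
```

where $p \in \mathbb{R}^n$ is the $n$-pixel reference pattern, $\lambda$ collects the transformation parameters (e.g. rotation angle, 2-D translation, scale), and $U_\lambda$ applies the geometric transformation with parameters $\lambda$ to the pattern.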
0:01:32 So our focus in this work is the following. We assume that we are given a set of geometrically transformed observations of a signal class, like a '5' digit here, and from these observations we try to construct a pattern transformation manifold. In particular, we want to find a pattern p such that the transformation manifold of p represents this data set well. So it is like a manifold fitting problem, but we fit a transformation manifold to the data instead of an arbitrary surface; that is, the problem is to find a suitable pattern p.
0:02:09 So, this kind of framework has some immediate applications in the modeling and the registration of the input data. Encoding is also possible: because we will be building the pattern p in terms of some parametric atoms, the model can also be used to code the input data. Another advantage is that we provide an analytic model for our manifolds, so that by changing the parameters we can generate new data on the manifold. This makes it possible to compute exact distances between images and the constructed manifold. This can be of use in some classification settings: for instance, if we are given a test image under some geometric transformation that we want to classify, we just need to compute its distance to the transformation manifold of each class.
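The classification-by-manifold-distance idea just described can be sketched in a few lines. This is my own minimal toy, not the speaker's code: the "geometric transformation" is simplified to a circular shift of a 1-D signal, so the manifold of a pattern p is the set of all its shifts.

```python
import numpy as np

def manifold_distance(u, p):
    """Min squared distance from u to the set of all circular shifts of p."""
    return min(np.sum((u - np.roll(p, s)) ** 2) for s in range(len(p)))

def classify(u, class_patterns):
    """Return the label of the nearest transformation manifold."""
    return min(class_patterns, key=lambda k: manifold_distance(u, class_patterns[k]))

# Two toy class patterns and a shifted, noisy test sample from class "a".
n = 32
t = np.arange(n)
patterns = {
    "a": np.exp(-0.5 * ((t - 8.0) / 2.0) ** 2),   # narrow bump
    "b": np.sin(2 * np.pi * t / n),               # sinusoid
}
rng = np.random.default_rng(0)
test = np.roll(patterns["a"], 11) + 0.05 * rng.standard_normal(n)
print(classify(test, patterns))  # → a
```

The test image is classified correctly even though it is shifted, because the distance is taken to the whole manifold rather than to the reference pattern itself.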
0:02:55 So, for the rest of the talk, I will first formalize the problem, and then I will describe a solution based on computing a representative pattern p with a greedy algorithm, by selecting atoms from a parametric dictionary.
0:03:10 So here, as shown on the slide, the manifold is denoted by M, and each image on the manifold is of the form U_λ(p); it means we take the pattern p and apply some geometric transformation λ to it. We denote our input images by u_i; these have undergone some geometric transformations. What we are trying to do is find a common reference pattern p and model the input points as transformations of this common pattern p plus some error term, where the error term e_i gives the deviation of the image u_i from the constructed manifold.
0:03:47 And we assume that we know the type of the transformations (for instance, we know whether it is rotation, translation, scale), but we still need to register the input data; that means we need to compute a parameter vector λ_i for each input image.
0:04:01 And then we use this idea of constructing p as a combination of atoms: p equals the sum of the atoms a_j weighted by the coefficients c_j. We also assume that these atoms come from a parametric dictionary, which means that each atom in the dictionary is a geometrically transformed version of an analytic mother function, denoted by φ here, where U denotes the geometric transformation. Some possible examples for the generating mother function could be a Gaussian mother function, or the anisotropic refinement mother function, denoted by AnR. And here you see some atoms that are derived from the Gaussian mother function through some geometric transformations.
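A minimal sketch of such a parametric dictionary (my own toy version, not the talk's implementation): each atom samples the Gaussian mother function φ(x, y) = exp(-(x² + y²)) after a geometric transformation with parameters (tx, ty, θ, sx, sy) for translation, rotation and anisotropic scaling.

```python
import numpy as np

def gaussian_atom(n, tx, ty, theta, sx, sy):
    """Sample a transformed Gaussian atom on an n x n grid, unit-normalized."""
    y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    # Inverse-transform the grid coordinates, then evaluate the mother function.
    xr = np.cos(theta) * (x - tx) + np.sin(theta) * (y - ty)
    yr = -np.sin(theta) * (x - tx) + np.cos(theta) * (y - ty)
    atom = np.exp(-((xr / sx) ** 2 + (yr / sy) ** 2))
    return atom / np.linalg.norm(atom)

# A small dictionary built from a grid of parameter combinations.
dictionary = [gaussian_atom(16, tx, ty, th, sx, sy)
              for tx in (-3, 0, 3) for ty in (-3, 0, 3)
              for th in (0.0, np.pi / 4) for sx in (1.0, 3.0) for sy in (1.0,)]
print(len(dictionary))  # → 36
```

Because every atom is a smooth function of its parameters, such a dictionary stays differentiable, which matters for the tangent-distance computations later in the talk.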
0:04:47 And here is the formulation of this manifold fitting problem. We would like to minimize the total distance of our input images to the constructed manifold, and we would like to achieve this just by picking a subset of the atoms in the dictionary, so that p is now the sum of these atoms a_j weighted by the coefficients c_j, and also by optimizing the coefficients of these atoms, such that this total distance, denoted E here, is minimized.
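Written out in my own (hedged) notation, consistent with the definitions given so far, the objective is:

```latex
E \;=\; \sum_{i=1}^{N} \min_{\lambda_i \in \Lambda} \bigl\| u_i - U_{\lambda_i}(p) \bigr\|^2,
\qquad
p \;=\; \sum_{j=1}^{K} c_j \, a_{\gamma_j},
```

where the atoms $a_{\gamma_j}$ are drawn from the parametric dictionary, and both the atom selection $\gamma_j$ and the coefficients $c_j$ are optimized.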
0:05:15 And here you see the general scheme of the greedy algorithm that we propose. We first choose an atom from the dictionary arbitrarily, a suitable one, and we set it as the pattern p. We then compute the projections of our input images onto the manifold. And then comes the main loop: at each iteration we select an atom a and a coefficient c such that we reduce the error, and then we add this atom to our pattern. This updates the manifold, and now that the manifold has changed, we recompute the projections of our input images onto the manifold. We continue this loop until the data approximation error stops decreasing.
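The loop just described can be made concrete with a small runnable sketch. This is my own toy instantiation, not the speaker's implementation: the transformation is a circular shift of a 1-D signal, the dictionary holds Gaussian atoms, and because the shift structure allows it, the tangent-based coefficient estimate is replaced by an exact closed-form coefficient given the current registrations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_images = 64, 10
t = np.arange(n)
dictionary = [np.exp(-0.5 * ((t - mu) / s) ** 2)
              for mu in range(0, n, 8) for s in (1.5, 3.0)]

# Ground-truth pattern and its randomly shifted, noisy observations.
true_p = dictionary[3] + 0.7 * dictionary[10]
images = [np.roll(true_p, rng.integers(n)) + 0.02 * rng.standard_normal(n)
          for _ in range(n_images)]

def project(u, p):
    """Best circular shift of p onto u; returns (shift, squared distance)."""
    d = [np.sum((u - np.roll(p, s)) ** 2) for s in range(n)]
    s = int(np.argmin(d))
    return s, d[s]

def total_error(p):
    return sum(project(u, p)[1] for u in images)

p = np.zeros(n)
err = total_error(p)
for _ in range(4):                       # a few greedy iterations
    shifts = [project(u, p)[0] for u in images]
    best = None
    for a in dictionary:
        # Optimal coefficient given the current registrations (closed form).
        c = sum(np.dot(u - np.roll(p, s), np.roll(a, s))
                for u, s in zip(images, shifts)) / (n_images * np.dot(a, a))
        cand = p + c * a
        e = total_error(cand)            # re-projects, so shifts may move
        if best is None or e < best[0]:
            best = (e, cand)
    if best[0] < err:                    # accept only if the true error drops
        err, p = best
print(err < total_error(np.zeros(n)))    # → True
```

Accepting an atom only when the re-projected error actually decreases mirrors the acceptance rule the speaker describes later in the Q&A.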
0:06:00 Now, how do we achieve the minimization of this error? Unfortunately, this error has a complicated dependence on the atom and the coefficient, and for the following reason. Let us imagine that we are now in the j-th iteration of the algorithm and we have already computed the manifold of p_{j-1}. So if we take an input image u_i, we know its projection on the manifold, since it is already computed, so we know the corresponding parameter vector λ_i. However, when we update the manifold by adding a new atom, its projection point will change, and most probably it will correspond to a parameter vector λ_i' which is different from λ_i; and we do not know what this λ_i' will be. But if we write down the total distance, we see that it depends on this unknown new value of the parameter vector λ_i'. That is why it is not easy to minimize this error E directly.
0:06:52 So we define an approximation Ê of E, and we minimize this Ê instead. And what is this Ê? It is just the sum of the estimated distances of the input points to the new manifold, and we obtain it as follows: we take the new manifold and we obtain a first-order approximation of this manifold around the projection points that are already known; then the estimate of the distance of u_i to the new manifold is just the distance between u_i and this first-order approximation.
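In my own (hedged) notation, the estimate replaces the unknown new projection with a tangent-plane distance computed at the old parameters λ_i:

```latex
\hat{E}(a, c) \;=\; \sum_{i=1}^{N} d^2\!\bigl(u_i,\; \mathcal{T}_i\bigr),
\qquad
\mathcal{T}_i \;=\; \Bigl\{\, U_{\lambda_i}(p + c\,a)
  + \sum_{k} \alpha_k \, \tfrac{\partial U_{\lambda}(p + c\,a)}{\partial \lambda^k}\Big|_{\lambda_i}
  \;:\; \alpha \in \mathbb{R}^d \,\Bigr\},
```

i.e. $\mathcal{T}_i$ is the first-order (tangent) approximation of the updated manifold $\mathcal{M}(p + c\,a)$ around the previously computed projection parameters $\lambda_i$, and $d(\cdot,\cdot)$ is the Euclidean point-to-plane distance.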
0:07:23 So, actually, we do something pretty straightforward to minimize Ê: we just test each of the atoms of the dictionary one by one, and for each atom we compute the optimal coefficient c that minimizes this error Ê. If we write this Ê as a function of c, we see that it is in the form of a rational function; that means this function is of the form f_i(c)/g_i(c), where the f_i and g_i are polynomials of c. In general, such a function has several local minima. However, as we have seen in practical experiments, most of the time it is possible to minimize Ê just by a simple descent algorithm; it is not an extremely complicated function in practice.
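The claim that a rational function with several local minima can still be minimized cheaply can be illustrated with a toy example (my own arbitrary function, not the actual Ê): a coarse grid picks the right basin, and a simple derivative-free descent refines the answer.

```python
import numpy as np

def r(c):
    # An arbitrary rational function of c with two local minima.
    return (c**4 - 3 * c**2 + c + 4) / (c**2 + 1)

grid = np.linspace(-3, 3, 601)
c0 = grid[np.argmin(r(grid))]          # basin selection on a coarse grid
step = 0.01
for _ in range(100):                   # simple descent refinement
    if r(c0 - step) < r(c0):
        c0 -= step
    elif r(c0 + step) < r(c0):
        c0 += step
    else:
        step /= 2
print(round(c0, 3))
```

This particular function has a local minimum near c ≈ 1.4 and its global minimum near c ≈ -1.3; the grid step keeps the descent from getting trapped in the wrong basin.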
0:08:05 So we try each atom and compute its optimal coefficient, and then, among all the atoms, we pick the best one, the one that yields the smallest Ê*. We then add this atom to the pattern, weighted by its optimal coefficient, and we repeat this procedure.
0:08:22 So now, some experiments, first on synthetic data and later on real data. In this experiment we use a transformation manifold model of dimension three, so we have rotation and two-dimensional translation. We generate a synthetic pattern by randomly combining some Gaussian and AnR atoms, and we construct different data sets from this pattern: each data set consists of some random geometric transformations of this synthetic pattern, and we further add to each data set a different amount of Gaussian noise, with one noise variance per data set. And we use a dictionary consisting of Gaussian atoms.
0:09:05 So here you see the data approximation error plotted against the noise variance. The approximation error is the total squared distance of the input images to the computed manifold. You see that it has a linear variation with the noise variance, which is an expected result. However, if you pay attention, this line does not pass through the origin, and this actually reveals the error of the algorithm. There are two main sources of this error: first of all, we use a greedy algorithm, and it does not have an optimal performance; and secondly, we use a dictionary of finite size, that is, a discrete dictionary, and this also introduces some error.
0:09:46 And the next experiment is on handwritten digits. This time we use a four-dimensional transformation model, because we also have a change in the scale. In this experiment we use a hundred geometrically transformed '5' digits, and a similar dictionary. So, on the left you see some of the samples used in the experiment, and on the right you see the pattern that we obtain with twenty-four atoms. It looks like a '5' digit that captures the common characteristics of the '5' digits in the data set, despite the variation within the data set.
0:10:22 And also, for a numerical comparison, we have compared our method to some reference schemes, and we have used as the error measure the data approximation error. The first two reference schemes again compute progressive sparse approximations of the signals. In the first one, we have applied matching pursuit on a typical pattern in the data set, labeled 'average' here; we have chosen it to be the input image that is closest to the centroid of all input images. And in the second one, we have applied simultaneous matching pursuit on all input images to achieve a joint sparse approximation of them.
0:11:06 And finally, as a third approach, we would like to provide a comparison between our method and classical manifold learning. It is known that some of the typical manifold learning algorithms make use of the assumption that the data has a locally linear behavior on the manifold. So we just compute this locally linear manifold approximation error, which is the sum over the points u_i of the distance between the point u_i and the plane passing through its nearest neighbors.
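The locally linear baseline just described can be sketched as follows (my own notation and toy data, not the talk's code): for each point, take the squared residual of its projection onto the affine hull of its k nearest neighbours, and sum over all points.

```python
import numpy as np

def local_linear_error(X, k=3):
    """Sum over points of squared distance to the affine hull of k neighbours."""
    total = 0.0
    for i in range(len(X)):
        d = np.sum((X - X[i]) ** 2, axis=1)
        nbrs = np.argsort(d)[1:k + 1]            # skip the point itself
        base = X[nbrs[0]]
        A = (X[nbrs[1:]] - base).T               # spanning directions
        r = X[i] - base
        coef, *_ = np.linalg.lstsq(A, r, rcond=None)
        total += float(np.sum((r - A @ coef) ** 2))
    return total

# Points on a plane in R^3 have (near-)zero error; a random cloud does not.
rng = np.random.default_rng(2)
plane = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 3))
cloud = rng.standard_normal((30, 3))
print(local_linear_error(plane) < 1e-12, local_linear_error(cloud) > 1e-6)
```

The contrast between the two point sets is the essence of the baseline: when the data really lies near a low-dimensional, densely sampled manifold, the error is small; when the sampling is sparse or the data is unstructured, it grows.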
0:11:39 In the plots you see here, the lowest curve is the transformation-invariant matching pursuit method that we have proposed, so we get the best error performance. We see that the red curve corresponds to matching pursuit on the average pattern; since it builds on a single typical pattern of the data set, it cannot capture the variation over the whole data set, so its approximation error is larger. And the situation is similar for the remaining curve, which shows the pattern obtained when we applied simultaneous matching pursuit for a joint sparse estimation of a single pattern p.
0:12:17 And finally, some experiments on face images. This time the transformation model is of higher dimension, because we also have anisotropic scaling. We have used some face images of the same subject, but we also had, in the data set, some variations of facial expression that we do not model. Things like facial expression variations are instead captured by the error term; they are the source of the deviation from the computed manifold. Here, on the left, you see some images taken from the data set, and on the right, the face pattern that we have computed. It looks more or less like the face of the same person; there is also some kind of averaging of the facial expressions that we had in the data set.
0:13:00 And if you look at the error plots, we see here that, OK, even if we still get the best error performance, the matching pursuit references perform close to ours in this case. This is because the amount of variation between the face images of the same person is much smaller compared to the variation between the handwritten digits, so a typical pattern of the data set is able to approximate the other patterns fairly well. In the meantime, if you look at the dashed line, the locally linear approximation error is pretty high. The reason for this is that in this experiment we have used just thirty-five images, so the data is sparsely sampled on the manifold, and the local linearity assumption does not hold anymore.
0:13:45 So, to conclude, we have presented a method for the transformation-invariant sparse approximation of a set of signals. We have built a representative pattern with a greedy algorithm, by parametric atom selection. The complexity of the method that we propose changes linearly with respect to the number of atoms in the dictionary; it is also linear with respect to the number of images in the input set; and it has a quadratic dependence on the dimension of the manifold and on the image resolution. Furthermore, we have shown in another work that, under some assumptions on the transformation model and also on the structure of the dictionary, we can achieve a joint optimization of the atom parameters and the coefficients c. In that case we optimize over the continuous dictionary manifold, rather than over a fixed dictionary of sampled atoms, and we get rid of this term here: we no longer have the dependence on the number of atoms, because the optimization over the dictionary is local.
0:14:50 So, as a final remark, our work can be related to two areas: one is sparse signal approximation, and the other is manifold learning. What we gain over sparse signal approximation schemes like matching pursuit and simultaneous matching pursuit is that we achieve invariance to geometric transformations of the data, due to the fact that we use a transformation manifold model. On the other hand, the advantages we have over classical manifold learning algorithms are the following. First of all, we provide an analytic model for the data, and that has nice properties: it is differentiable, it is smooth. It is also possible to code the data with the parametric atoms. It allows the generation of new data on the manifold. And finally, it can still work if the sampling of the data set is sparse, whereas many manifold learning methods would require a much denser sampling. So, that is all, and thank you very much for your attention.
0:15:52 (Session chair) Thank you. Shall we have a first question?
0:16:05 (Audience member) [Question partly inaudible: it appears to ask whether selecting the best atom at each step guarantees that the error decreases and that the algorithm converges.]
0:16:10 (Speaker) Actually, what we do is, we do not minimize E; we minimize Ê, which is an approximation of it. So at each iteration, OK, we minimize this Ê, but this Ê is not equal to E; that is one reason. And the second reason is that, even if you were to minimize E itself, the projection points would change afterwards. So for these two reasons, when you do this optimization, you cannot guarantee that you will reduce the error. But what we do in practice is this: OK, we try to pick the best atom, we update the projections, and then we check whether the error has indeed decreased; we update the pattern only if the error decreases. If it does not, we try another atom; that is, we do not pick the best one but the second best one, and then we try again. So, since we update the pattern only if the error decreases, we reduce the error E for sure in each iteration; and since it has a lower bound, it has to converge at some point.
0:17:16 (Audience member) [Follow-up question partly inaudible: it appears to ask whether this convergence argument holds for other types of transformations.]
0:17:28 (Speaker) Yeah, I think that in whatever way you define it, I mean, for whatever kind of transformation you consider, as long as you define this error E as this total distance, and you guarantee that it decreases at each iteration, the argument holds. So if you have a decreasing function that is lower-bounded, it means that it has to converge after a while, since it is a monotonically decreasing function.
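The convergence argument the speaker gives can be stated compactly (a standard monotone-sequence fact, formalized here in my own notation):

```latex
E^{(j+1)} \;\le\; E^{(j)} \;\; \text{for all } j,
\qquad
E^{(j)} \;\ge\; 0
\;\;\Longrightarrow\;\;
\lim_{j \to \infty} E^{(j)} \text{ exists.}
```

A monotonically non-increasing sequence that is bounded below must converge; since the pattern is updated only when the error E decreases, the sequence of errors satisfies exactly this.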
0:17:57 (Audience member) [Question partly inaudible:] It seems it also depends on the dictionary that is used. You need to know the dictionary a priori; would it be possible to learn it, to optimize it from the data?
0:18:24 (Speaker) So, that is a question about dictionary learning, I guess. We have not done anything like, say, K-SVD to optimize the atoms. One reason for this is that we really would like to retain the parametric forms of the atoms: we need them to be differentiable functions, because we are talking about tangent distances to the manifold, so they just cannot be arbitrary functions. But this thing that I have mentioned here at the end is in a way related to this field of dictionary learning, because here we have a dictionary manifold, and what we do is we optimize over the dictionary manifold; that is, we optimize the parameters of the atoms. So this is somehow related to dictionary learning. But we consider a differentiable setting: the mother function can be any differentiable analytic function, so it is quite generic in that sense. But no, it is not learned from the data; we work with predefined mother functions.
0:19:29 (Audience member) [Question partly inaudible:] You said that you exploit the fact that this is a sparse approach, but do you actually explicitly use a sparsity constraint in your optimization? My question is: how does this depend on your dictionary, and how do you take sparsity into account in your optimization problem?
0:19:57 (Speaker) No, we have not introduced an L1 norm or anything like that; we do not treat it in that way.
(Audience member) The sparsity you are talking about is in which domain?
(Speaker) So here, there is sparsity in terms of these dictionary atoms that we use. So here you have p as a sum of K atoms; if K is much smaller than n, the number of pixels that you have, then the image, which is this pattern p, is sparse in this domain consisting of the atoms. So the main thing you get here is that you can say, OK, I will take, for instance, fifty atoms; I keep the best fifty atoms, and I use them to get a sparse approximation of the pattern.
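The sparsity being referred to can be written explicitly (in my own notation, reconstructed from the talk):

```latex
p \;=\; \sum_{j=1}^{K} c_j \, a_{\gamma_j}, \qquad K \ll n,
```

so p is K-sparse in the dictionary of parametric atoms rather than in the pixel domain, and the number of selected atoms K (e.g. fifty) is the handle that controls the sparsity level.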
0:20:57 (Session chair) Are there any other questions? If not, let us thank the speaker again.