0:00:14 Thank you for the introduction. My presentation is about regularization parameter selection; it is basically about sparsity-based estimation in general, but we are going to explain our results in the context of DOA estimation. For this reason I will first introduce the DOA problem and how we can solve it by sparse representation. We then address the regularization parameter estimation problem, introduce a method to solve it, and finally compare the different methods.
0:01:00 In the DOA problem we have a set of sources and a small array of sensors, and we want to estimate the directions of the sources, that is, the directions of arrival of the signals at the sensors, only from the data received at the array.
0:01:25 Our assumptions are that the signals are narrowband and that the sources are in the far field, so that we can write the received data as a linear model. If we assume a source from a certain direction in space, the received data is the multiplication of a steering vector, which depends on the configuration of the sensor array, with the source signal. If we have several sources, the total received data is obtained by superimposing the individual terms: it is a linear combination, which can be written in matrix form. Of course, there is always noise as well.
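A minimal sketch of the data model just described, assuming a half-wavelength uniform linear array; all names (`M`, `thetas`, `steering`) are illustrative, not the speaker's notation:

```python
import numpy as np

M = 8                                   # number of sensors (assumed)
rng = np.random.default_rng(0)

def steering(theta_deg, M):
    """Steering vector of a half-wavelength ULA for direction theta."""
    m = np.arange(M)
    return np.exp(1j * np.pi * m * np.sin(np.deg2rad(theta_deg)))

thetas = np.array([-20.0, 5.0, 40.0])   # true source directions (illustrative)
A = np.column_stack([steering(t, M) for t in thetas])          # M x K
s = rng.standard_normal(len(thetas)) + 1j * rng.standard_normal(len(thetas))
noise = 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

# Received data: superposition of the per-source terms, plus noise.
y = A @ s + noise
print(y.shape)
```

The received snapshot is the linear combination A s plus noise, exactly the matrix form on the slide.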
0:02:26 There are many ways to solve this problem, but many of them, like the subspace methods, rely on assumptions that are not general; for example, in the single-snapshot case they fail. The most general solution is the LASSO solution, which we are going to explain now.
0:02:47 For sparse estimation we need to discretize the space into a grid. With a very fine grid we get a good estimation; the discretization introduces a quantization noise. We then introduce many virtual sources, one for each grid point in space. Most of them are identically zero, except for the sources we are actually interested in. So we are introducing a very long vector of sources, most of whose entries are zero.
0:03:28 Then we can write a model that matches the previous one: the data is a linear combination of the sources, where now A is the matrix of all the steering vectors, one for each grid point, and it is a very fat matrix. So even if there is no noise, this model cannot be solved directly, because A is not invertible. We therefore need a constraint, obviously on the number of sources: the signal is sparse, and the number of nonzero elements is known.
0:04:04 To express this in the general situation where we have many snapshots, the usual way is to look at the extended sources direction-wise: we introduce the average power from each direction in space, given by the average energy of the sources from that direction. For a direction with no source it will be zero. So the zero-norm of this power vector, which is itself long, gives the number of sources.
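An illustrative sketch of this multi-snapshot, direction-wise view: with an N-point grid and T snapshots, collect the source amplitudes in an N x T matrix and take the l2 norm of each row as the per-direction power; the zero-norm of that vector counts the sources. The sizes and names here are assumptions:

```python
import numpy as np

N, T = 90, 50                           # grid points, snapshots (assumed)
rng = np.random.default_rng(1)
S = np.zeros((N, T))
active = [17, 44, 63]                   # grid indices of the true sources
S[active, :] = rng.standard_normal((len(active), T))

row_power = np.linalg.norm(S, axis=1)   # one number per direction
n_sources = np.count_nonzero(row_power) # the "zero-norm" of the power vector
print(n_sources)                        # 3
```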
0:04:37 The maximum likelihood estimator can then be written as this: the usual maximum likelihood term, related to the model, plus the constraint on the number of nonzero elements. It is hard to solve this problem; it is as hard as solving the usual maximum likelihood, a nonlinear least-squares problem. What has been done is to write it in this equivalent form and then replace the zero-norm with the one-norm. This is an approximation, but it works.
0:05:13 The reason, which is explained by this picture, is that in the optimization it is more probable to hit the corners, like here in the diamond shape, of the ball introduced by the one-norm; those corners are the sparse solutions.
0:05:34 So instead of that one, we solve this one: the LASSO. Here we have the one-norm scaled by lambda instead of the number of nonzero elements. You will see that lambda controls the sparsity of the source vector: if you take a higher value of lambda, then fewer sources are active in the model. So from an estimation point of view, the question is how to choose the proper value of lambda. For example, here we have three sources, and probably this point should be the one chosen by a method.
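A minimal real-valued LASSO sketch via ISTA (proximal gradient), illustrating how lambda controls the sparsity of the solution. The matrix here is a generic overcomplete dictionary, not the speaker's steering matrix; all names and sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 20, 60
A = rng.standard_normal((M, N)) / np.sqrt(M)
s_true = np.zeros(N)
s_true[[5, 30, 50]] = [3.0, -2.0, 2.5]  # three active grid points
y = A @ s_true + 0.01 * rng.standard_normal(M)

def lasso_ista(A, y, lam, n_iter=2000):
    """Minimize 0.5*||y - A s||^2 + lam*||s||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the LS gradient
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = s + A.T @ (y - A @ s) / L   # gradient step on the LS term
        s = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return s

# Larger lambda -> fewer active (nonzero) grid points.
for lam in (0.01, 0.1, 1.0):
    s_hat = lasso_ista(A, y, lam)
    print(lam, np.count_nonzero(np.abs(s_hat) > 1e-3))
```

Sweeping lambda this way traces the tradeoff the speaker describes: the proper lambda is the one whose active set matches the true number of sources.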
0:06:12 Before going to the method, I need to review some points about the LASSO. To estimate the number of sources, in a simulation you get such an estimated spectrum over the space, and then we need a thresholding step to choose which directions are active and which are not; but this is not a problem, because the difference is very high.
0:06:36 The LASSO-based estimator is a good estimator of the directions, but it is biased for the source amplitudes. The directions are the main part of the problem, but if you also want a good estimate of the source amplitudes, then you have to solve another estimation problem: assuming the directions are known, it becomes the usual least-squares minimization.
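A sketch of that debiasing step: keep the support found by the sparse estimator, then re-solve an ordinary least-squares problem restricted to those columns. The setup and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 20, 60
A = rng.standard_normal((M, N)) / np.sqrt(M)
s_true = np.zeros(N)
support = [5, 30, 50]                    # directions assumed found by the LASSO stage
s_true[support] = [3.0, -2.0, 2.5]
y = A @ s_true + 0.01 * rng.standard_normal(M)

# Restrict to the detected columns and refit by least squares (no l1 shrinkage).
A_sub = A[:, support]
s_refit, *_ = np.linalg.lstsq(A_sub, y, rcond=None)
print(np.round(s_refit, 2))
```

Because the refit drops the one-norm penalty, the amplitude estimates are no longer shrunken toward zero.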
0:07:05 Now we come to the problem of regularization parameter selection. First, let us review some concepts in estimation, especially the maximum a posteriori (MAP) estimator. If you write the log posterior, it is the combination of the log prior and the log likelihood of the model. So the question is how to choose the prior. There are two ways to think about it. One way is that the prior is given by the physics, by nature, and is a part of the model. The other way is that the prior is a tool to impose some desired properties on the estimator.
0:07:56uh a to illustrate it uh
0:07:59a will show this
0:08:00uh graph which
0:08:01well uh one of the axes
0:08:03the
0:08:03probably there are for an estimator
0:08:05uh when the the but the values of true parameter values or to come one and S one
0:08:10and the the other their as would be a problem to of error white this estimator
0:08:14i when the true by do the parameters of to to to an S two and that there is a
0:08:17tradeoff between a this to you cannot be good as a whole something
0:08:21and by choosing the prior are actually moving the score
0:08:25uh and the probably you need to a you want to be here if you don't have any preference
0:08:31uh you want to be good
0:08:32for all of them and that's why we choose the prior
0:08:36usually a a a from prior what works well but that's always and
0:08:41uh a here you can see that
0:08:43if if a a are good and one at them to the case where than for example the noise
0:08:46was a zero or the number of just go to infinity
0:08:49then uh it's possible to get a better estimation the mute a new this man i mean a
0:08:55at the same time you will get a good estimation for all the
0:08:58uh
0:08:59but
0:09:01 One hard example is model order selection, which interests us because choosing the right lambda is equivalent to choosing the right model order. As you can see, choosing a uniform prior here will overfit. There have been many discussions about how to choose a good prior, and there are some good priors based on the asymptotic case, for example for a large number of snapshots, given by AIC or MDL. Here we use MDL, which chooses the best model as the one that describes the data in the least number of bits. This can be expressed as two terms: one term is the number of bits for the parameters, and the other is the number of bits for the data given the parameters we chose. You can see there is a relation between maximum likelihood and MDL. For our problem, MDL is given by this formula. As you can see, this criterion works very well asymptotically, but for a low number of snapshots it doesn't work.
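A generic MDL-style criterion in the spirit of the slide: the best order minimizes (bits for the residual) plus (bits for the parameters). The textbook linear-regression form used here, MDL(k) = (n/2)*log(RSS_k/n) + (k/2)*log(n), and the polynomial test problem are assumptions; the speaker's exact formula is not recoverable from the transcript:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.uniform(-1, 1, n)
# True model has 3 parameters (a quadratic) plus small noise.
y = 1.0 + 2.0 * x - 1.5 * x**2 + 0.1 * rng.standard_normal(n)

def mdl_score(k):
    """Description length of a k-parameter polynomial fit."""
    X = np.vander(x, k, increasing=True)        # columns 1, x, ..., x^(k-1)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coef) ** 2)
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

scores = {k: mdl_score(k) for k in range(1, 8)}
best = min(scores, key=scores.get)
print(best)                                     # typically the true order, 3
```

With many samples the penalty term stops the overfitting that a pure likelihood (uniform prior) criterion would show; with few samples, as the speaker notes, this asymptotic penalty is no longer reliable.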
0:10:19 As we are interested in the case of a single snapshot, where MDL tends to overfit the model again, we need to choose another prior. A good way to choose a prior is to look at the LASSO: because the LASSO works, it probably gives us a good prior as well. If you compare the maximum likelihood with the LASSO formulation, you will see that if we choose a Laplacian prior, the two coincide, and then you can use maximum likelihood over lambda to choose lambda. This gives an estimator that has actually been discussed in many papers, where it is said that it doesn't work. It actually does work, but the problem is that it only works for high SNRs, where you are very close to the asymptotic case, where actually any prior works. As you can see, there it follows the right answer: the minimum is related to the true number of sources.
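The Laplacian-prior connection the speaker alludes to can be written out as a short derivation (a standard result; the i.i.d. Laplacian prior with scale b and the Gaussian noise model with variance sigma^2 are the assumptions):

```latex
% MAP with Gaussian noise and i.i.d. Laplacian prior p(s_i) \propto e^{-|s_i|/b}:
\hat{s}_{\mathrm{MAP}}
  = \arg\max_{s}\;\log p(y \mid s) + \log p(s)
  = \arg\min_{s}\;\frac{1}{2\sigma^{2}}\lVert y - A s\rVert_{2}^{2}
                  + \frac{1}{b}\lVert s\rVert_{1},
% which, after scaling by \sigma^{2}, is the LASSO with
\lambda = \frac{\sigma^{2}}{b}.
```

This is why maximizing the likelihood over lambda under this prior amounts to tuning the LASSO's regularization parameter.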
0:11:27 So what is the problem? The problem is that we are giving high probability to undesired values. We are actually not interested in the high-dimensional part of the space, but the Laplacian prior gives most of its probability to the very high-dimensional part of the space. So we remove that: we constrain the probability density function of the prior to the low-dimensional subspaces, as shown here, where the prior is only over the low-dimensional part of the space. Then, solving for maximum likelihood, we get such an estimator, in which sigma and lambda are deterministic but unknown parameters; you can then use the estimate of sigma, as explained before, to get a criterion, and as a result you can compare it to MDL.
0:12:27 So we have the MAP estimate with a mismatched prior, obtained by playing with the optimization. We can have many equivalent forms of the optimization, and all of them can be interpreted with different priors. There is a long discussion in the paper which I am not going to go through in detail, but as you can see, by playing with the optimization you get different priors, and all of them work very well. There are slight differences, related to the prior: you play with the formulation and you change the prior. But MDL does not work here; it only works asymptotically.
0:13:16 As the conclusion: we have the LASSO, and it works. You can choose to use it for DOA model order selection; you can also choose the asymptotic criteria, or you can choose the LASSO as a prior for the model order, and in this case the latter works much better. Thank you.
0:14:02 (Question) I think the log probability should be maximized, but AIC and BIC are usually minimized.
0:14:17 (Answer) We are using the negative log-likelihood, so we are minimizing.
0:14:29 (Question) Since you are discretizing the space, why is it that you get such clear peaks to detect, as in one of your previous slides?
0:14:48 (Answer) Actually, there are some proofs that the zero-norm and the one-norm problems give the same solution when there is low correlation between the columns of your matrix, in this case of your discretized steering space, so that a coherence condition holds. Such proofs exist, for example, for random matrices; as long as the signal is sparse enough, this still holds.
0:15:24 Okay.
0:15:26 Do we have another question?
0:15:31 (Question) I do have a question. It seems that you have some kind of mixture between, let's say, a deterministic approach and a Bayesian approach. Why don't you go the fully Bayesian way, assuming priors on sigma and lambda as well?
0:15:50 (Answer) Putting a prior on them is one way to look at it, but it does not work directly, and in the literature there are some papers that use such priors in a fully Bayesian way. What I want to say is that there is no single way to think about it: you can go fully Bayesian, and you can always choose whatever prior you want at a higher level, to some extent.
0:16:27 Okay, thank you.