Thank you for the introduction. My presentation is about regularization parameter selection, and basically it concerns sparse-based estimation in general, but we are going to explain our results in the context of DOA estimation. For this reason, I will first introduce the DOA problem and how we can solve it by sparse representation. Then we address the regularization parameter estimation problem and introduce a method to solve it, and finally we compare the different methods.

In the DOA problem we have a set of sources and a small array of sensors, and we want to estimate the directions of the sources, that is, the directions of arrival of the signals at the sensors, only by observing the data received at the array.

Our assumptions are that the signals are narrowband and that the sources are in the far field, so that we can write the received data as a linear model of the source signals. If you assume a source from a certain direction in space, we can write the received data as the multiplication of the source signal with a steering vector, which depends on the configuration of the sensor array; for a uniform linear array it takes a known form. If we have many sources, the total received data is obtained by superimposing the different elements: it is a linear combination of such terms, which can be written in matrix form. Of course, we always have noise there as well.
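The narrowband far-field model just described can be sketched as follows. This is my own illustration, not the speaker's code; the half-wavelength element spacing, the function names, and the noise level are assumptions.

```python
import numpy as np

def steering_vector(theta_deg, n_sensors, spacing=0.5):
    """Steering vector of a uniform linear array (spacing in wavelengths)."""
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * spacing * k * np.sin(np.deg2rad(theta_deg)))

def received_data(thetas_deg, signals, n_sensors, noise_std=0.1, seed=0):
    """Superimpose one steering vector per source and add noise: y = A s + n."""
    rng = np.random.default_rng(seed)
    A = np.column_stack([steering_vector(t, n_sensors) for t in thetas_deg])
    noise = noise_std * (rng.standard_normal(n_sensors)
                         + 1j * rng.standard_normal(n_sensors)) / np.sqrt(2)
    return A @ np.asarray(signals) + noise

# Two sources at -20 and 35 degrees observed by an 8-sensor array:
y = received_data([-20.0, 35.0], [1.0, 0.8], n_sensors=8)
```

The columns of `A` play the role of the steering vectors in the matrix form of the model.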

There are many ways to solve this problem, but many of them, like the subspace methods, have problems: they are not general. For example, in the one-snapshot case or with coherent sources, they fail. The most general solution is the LASSO solution, which we are going to explain now.

For sparse-based estimation we need to discretize the space into a grid. We then assume that the sources lie on the grid; with a very fine grid you will get a good estimate, so the quantization noise introduced by discretizing the space can be made small. The idea is that we introduce an imaginary source for each grid point in the space; most of them send zero, except the ones corresponding to the sources we are interested in. So we introduce a very long vector of sources, most of whose elements are zero, and we can write a model, related to the previous one, with the data as a linear combination of this source vector. Now A is the matrix of all the steering vectors for the grid points, and it is very fat: it has far more columns than rows. So even if there is no noise, this model cannot be solved directly, because A is not invertible.

So we obviously need a constraint, and the constraint is on the number of sources: the signal is sparse and the number of nonzero elements is small. To express this in the general situation where we have many snapshots, the usual way is to look at the extended sources direction-wise: we introduce the average power from each direction of the space, given by the average energy of the sources from that direction; for the directions where there is no source it will be zero. The zero-norm of this power vector, which is long itself, gives the number of sources.

Then the maximum likelihood estimator can be written as shown: the usual maximum likelihood cost related to the model, together with the constraint on the number of nonzero elements. It is hard to solve this problem; it is as hard as solving the usual maximum likelihood, a nonlinear least squares. What has been done is to write it in an equivalent form and then replace the zero-norm with the one-norm. This is an approximation, but it works.

The reason, which is explained by this figure, is that in the ℓ1 optimization it is more probable to hit the corners, like here, of the diamond-shaped ball introduced by the one-norm, and the corners are the sparse solutions. So this is the LASSO: a least-squares data term plus λ times the ℓ1 norm, and λ controls the sparsity of the solution. If you choose a higher value of λ, then fewer sources are active in the model. So from an estimation point of view, the question is how to choose the proper value of λ.

For example, here we have three sources, and this point is presumably the one the method should choose. But before going to that, I need to review some points about the LASSO.

To estimate the number of sources, you run the optimization and get an estimated spectrum over the space, and then you need a threshold to choose which directions are active and which are not; this is not a problem in practice because the difference is very large. The LASSO-based estimator is a good estimator of the directions, but it is biased for the source powers. So if you want a good estimate of the source powers, which is not part of the main problem here, then you have to solve another estimation problem: assuming the directions are known, it is the usual maximum likelihood, that is, least squares.
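The re-estimation step mentioned here, refitting the amplitudes by ordinary least squares once the directions are fixed, can be sketched like this (my own illustration; the array size and angles are assumptions):

```python
import numpy as np

def steering(theta_deg, n_sensors=8, spacing=0.5):
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * spacing * k * np.sin(np.deg2rad(theta_deg)))

def refit_amplitudes(y, thetas_deg, n_sensors=8):
    """Least-squares amplitudes given the (already estimated) directions."""
    A = np.column_stack([steering(t, n_sensors) for t in thetas_deg])
    s_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    return s_hat

# Noise-free check: the refit removes the LASSO shrinkage bias entirely.
A = np.column_stack([steering(t) for t in (-20.0, 35.0)])
y = A @ np.array([1.0, 0.8], dtype=complex)
s_hat = refit_amplitudes(y, [-20.0, 35.0])
```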

Now we come to the problem of regularization parameter selection. First, let us review some concepts in Bayesian estimation, especially the maximum a posteriori (MAP) estimator. If you write the log posterior, it is, up to scale, the combination of the log prior and the log likelihood of the model. So the question is how to choose the prior. There are two ways to think about it. One way is that the prior is given by the physics, by nature, and is a part of the model. The other way is that the prior is a tool to impose desired properties on the estimator.

How do we choose it in the second way of thinking? To illustrate, I will show this graph. One of the axes is the probability of error of an estimator when the true parameter values are θ1 and s1, and the other axis is the probability of error of the same estimator when the true parameters are θ2 and s2. There is a tradeoff between the two: you cannot be good everywhere at once. By choosing the prior we are actually moving along this curve. Probably you want to be here, if you do not have any preference; you want to be good for all of them, and that is how we choose the prior. Usually a uniform prior works well, but not always.

Here you can see that if we go to an asymptotic case, for example the noise variance goes to zero or the number of snapshots goes to infinity, then it is possible to get a better estimator: the curve moves toward the origin, and at the same time you get a good estimator for all parameter values.

One hard example of prior selection, and the one we are interested in, is model order selection, because choosing the right λ is equivalent to choosing the right model order. As you can see, choosing a uniform prior here would overfit. There have been many discussions about how to choose a good prior, and there are some good priors based on asymptotic cases, for example a large number of snapshots, given by AIC, BIC and MDL.

Here we introduce the MDL principle, which chooses the best model as the one that describes the data with the least number of bits. This can be expressed as follows: one term is related to the number of bits for the parameters, and the other is the number of bits for the data given the parameters we chose. You can see there is a relation between maximum likelihood and MDL. For our problem, MDL is given by this formula.
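To make the MDL idea concrete, here is a generic sketch: score each candidate order by a Gaussian negative log-likelihood plus (k/2)·log N bits for the parameters, and keep the minimizer. The polynomial toy problem is my own example, not the DOA formula on the slide.

```python
import numpy as np

def mdl_score(neg_log_lik, n_params, n_samples):
    """MDL: bits for the data (negative log-likelihood) + bits for the parameters."""
    return neg_log_lik + 0.5 * n_params * np.log(n_samples)

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = 1.0 - 2.0 * x + 0.5 * x**2 + 0.1 * rng.standard_normal(x.size)  # true order 2

scores = []
for order in range(6):
    coeffs = np.polyfit(x, y, order)
    resid = y - np.polyval(coeffs, x)
    nll = 0.5 * x.size * np.log(np.mean(resid**2))  # Gaussian NLL up to constants
    scores.append(mdl_score(nll, order + 1, x.size))

best_order = int(np.argmin(scores))
```

The log N penalty is what keeps the criterion from always preferring the largest model, and it is also why MDL needs many samples: with one snapshot the penalty term is negligible, which matches the overfitting reported later in the talk.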

As you can see, MDL works very well asymptotically, but for a low number of snapshots it does not work. In particular, as we are here in the case of one snapshot, it tends to overfit the model again. So we need another way to choose the prior.

A good way to choose a prior is to look at the LASSO, because the LASSO works, so it probably gives us a good prior as well. If we compare the maximum likelihood and the LASSO formulations, you will see that if we choose a Laplacian prior, then the MAP estimator coincides with the LASSO, and then you can use maximum likelihood to choose λ.
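The correspondence mentioned here is the standard MAP identity; in my own notation, with Gaussian noise of variance σ² and a Laplacian prior with rate μ, it reads:

```latex
\hat{s}_{\mathrm{MAP}}
  = \arg\max_{s}\; \log p(y \mid s) + \log p(s)
  = \arg\min_{s}\; \frac{1}{2\sigma^{2}} \lVert y - A s \rVert_{2}^{2}
      + \mu \lVert s \rVert_{1},
\qquad p(s) \propto e^{-\mu \lVert s \rVert_{1}},
```

so the MAP estimate is exactly the LASSO with λ = σ²μ, which is what ties the choice of λ to the noise level and the prior.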

This gives this estimator, and it has actually been discussed in many papers that it does not work. In fact it does work, but the problem is that it only works at high SNR, where you are very close to the asymptotic case, where actually almost any prior would work. As you can see, there it follows the right answer: the minimum is at the true number of sources. But at lower SNR it loses track.

So what is the problem? The problem is that we are giving high probability to undesired values. We are not interested in the high-dimensional subspaces, but the Laplacian prior puts most of its probability mass on the very high-dimensional subspaces. So we remove that: we constrain the prior probability density function to the low-dimensional subspaces, and it is given by this expression, a Laplacian-like density supported only on the low-dimensional subspaces.

Then solving for maximum likelihood gives such estimates of λ and σ, treated as deterministic but unknown parameters, and then you can use the estimate of σ, as explained before, to get an estimate. As a result, you can compare it to MDL.

So we have the MAP estimator. By playing with the optimization, we can obtain many equivalent forms of it, and all of them can be interpreted with different priors. There is a long discussion in the paper, and I am not going into the details here, but you can see that by playing with the optimization you get different priors, and all of them work very well.

As you can see, there are slight differences related to the prior: as you play with the formulation, you change the prior slightly. MDL, however, does not work here; it only works asymptotically.

As a conclusion: the LASSO works, and you can use it for model order selection. You can choose the asymptotic priors, but you can also choose the LASSO prior for the model order, and in this case it works much better. Thank you.

Question: [partly inaudible] I guess the maximum a posteriori probability, like MDL and BIC, usually uses the negative log-likelihood. Since you are discretizing the space, why is it that you can detect such clear peaks? Shouldn't the coherence of the discretized grid kick in, as on one of your previous slides?

Answer: Actually, there are some proofs that the zero-norm and the one-norm solutions coincide under some conditions; they require low correlation between the columns of your matrix, in this case of your discretized steering space, the mutual coherence condition. [partly inaudible] There is a proof that in this case the matrix is acceptable, so you still have a sparse solution.

Okay. Do we have another question?

Question: I do have a question. It seems that you have some kind of mixture between, let's say, a deterministic approach and a Bayesian approach. So why don't you go the full Bayesian way, assuming a prior for those parameters as well? [partly inaudible]

Answer: I mean, the first approach, putting a prior on it, is one way to look at it, but it doesn't work, and in the literature there are some papers that use such priors. [partly inaudible] But I want to say that there is no single way to think about it: you can go fully Bayesian, and you always choose the prior you want at a higher level, to some extent.

Okay, thank you.