Hello everybody. In this paper we focus on particle filtering techniques for high dimensions; such problems appear in application areas like group object tracking and extended object tracking. In the paper we first review recent works on group and extended object tracking, then we present an overview of the sequential Monte Carlo framework. The core of this work is the developed Markov chain Monte Carlo particle filter, which moves the cloud of particles towards more likely regions based on a subgradient projection step, and we show that it works well in high dimensions. Then we compare the performance of this filter with the sampling importance resampling particle filter and with the unscented Kalman filter. We also go beyond that and study the case when the data are sparse; this can bring complications, because the problem might become unobservable, and there we compare with a compressive sampling Kalman filter. Finally, I will conclude the talk with some open questions and future work.

There has been a lot of interest recently in group and extended object tracking. Broadly, we can classify the existing works into two big groups: methods for a small number of groups with a relatively small number of components, and sequential Monte Carlo methods for hundreds or thousands of objects, that is, huge groups. For small groups I have to mention some of the first works, which track up to twenty objects, for instance with a Markov chain Monte Carlo approach combined with a Markov random field interaction model. Koch and colleagues developed a range of techniques, Bayesian but not necessarily sequential Monte Carlo; one of the approaches is to look at this problem as tracking an extended object and estimating the parameters of its extent, for example elliptical or circular forms, and then the problem reduces to estimating that extent, which they do with random matrices. Then there is the group of PHD filter approaches, such as the Gaussian mixture PHD filter, and a whole range of techniques based on random finite sets. Recently there have also been approaches based on sequential Monte Carlo, including work that combines sequential Monte Carlo with evolving random graphs.

The case of a large number of objects within a group is especially challenging, not only because of the large dimension but also because you cannot estimate each object individually. This problem is usually addressed by forming a cluster and then estimating the centre of the cluster and its extent. The extended object tracking problem then comes down to a formulation where you want to know where the centre is and what the extent is, so you have both a state estimation problem and a parameter estimation problem. People often split up the two problems, because particle filters are not so good at parameter estimation, especially when the parameters are static. So the works in this group estimate the parameters with a separate approach and then feed them into the particle filter or another filter.
Another interesting group of approaches combines such techniques with the Kalman filter, with variational Bayesian methods, with Poisson-type process models, and so on.

What we do in our work is focus on a high-dimensional estimation problem, nonlinear in general, described with the general state-space equations: the state transition function is nonlinear and the state noise can be non-Gaussian in general; we assume the Markovian property, so the state depends only on the previous state; and the measurement equation is in general also nonlinear, with a noise process that can be non-Gaussian.

Briefly, we solve the Bayesian estimation problem of finding the posterior probability density function of the state given the data. The prediction step evaluates the Chapman-Kolmogorov equation, approximated with the particles and their weights, and the Bayesian update incorporates the new measurement. Within sequential Monte Carlo we therefore follow prediction and update steps: in the prediction step we draw from the transition prior, which is what we use as the proposal in our case, and we obtain the predictive density, with the particles spreading out due to the state noise; in the update step, when the measurement arrives, we weight the particles by the likelihood, and a resampling step follows.

Now, what are we doing? We move the cloud of particles with a Markov chain Monte Carlo method. Several solutions of this kind exist in the literature. One can use Metropolis-Hastings as follows: particles at time k-1 are taken from the proposal distribution, and the new particles at time k are drawn jointly, that is, we simulate a candidate x' from the joint probability density function, where x'_k is drawn from the transition prior and the previous state x'_{k-1} is drawn uniformly from the empirical distribution of the particles. Within the Metropolis-Hastings algorithm one then accepts or rejects the new candidate when the usual condition is satisfied: if a uniformly generated random number is less than or equal to the minimum of one and the likelihood ratio, we accept the candidate; otherwise we reject it. This is a good algorithm, but when the state noise is relatively small the moves can be rather small. There are recent improvements suggested by Godsill and his group, where one combines Metropolis-Hastings with a Gibbs sampler, and one can see much better mixing, especially for large groups. There are also other MCMC-based samplers in the literature.

What we do, as I show on the next slide, is use subgradient information of the likelihood in order to move the particles to more likely regions. We take a candidate x'_k propagated through this joint density and we calculate a shift based on the logarithm of the likelihood function: the step is along the subgradient of the log-likelihood evaluated at the candidate x', normalised, and it is scaled by a regularisation parameter lambda.
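To make the move concrete, here is a minimal Python sketch of one such subgradient-projection MCMC move applied to each particle. The function names (f for a draw from the transition prior, loglik and grad_loglik for the likelihood and its subgradient), the fixed Gaussian jitter, and the uniform lambda are illustrative assumptions rather than the exact regularised proposal of the paper, and the acceptance test uses only the likelihood ratio, as described above.

    import numpy as np

    def subgradient_mcmc_move(particles_prev, y, f, loglik, grad_loglik,
                              sigma=0.1, lam_range=(0.0, 1.0), rng=None):
        # particles_prev: (N, d) array of particles at time k-1
        # f(x, rng): one draw from the transition prior p(x_k | x_{k-1} = x)
        # loglik(x, y): log p(y_k | x_k); grad_loglik(x, y): its (sub)gradient in x
        rng = np.random.default_rng() if rng is None else rng
        n, d = particles_prev.shape
        current = np.array([f(x, rng) for x in particles_prev])  # plain transition-prior draws
        moved = current.copy()
        accepted = 0
        for i in range(n):
            j = rng.integers(n)                    # parent drawn uniformly from the empirical distribution
            x_prop = f(particles_prev[j], rng)     # propagate through the transition prior
            g = grad_loglik(x_prop, y)             # subgradient of the log-likelihood at the candidate
            lam = rng.uniform(*lam_range)          # regularisation parameter lambda
            gnorm = np.linalg.norm(g)
            if gnorm > 0.0:
                x_prop = x_prop + lam * g / gnorm  # push the candidate towards higher likelihood
            x_prop = x_prop + sigma * rng.standard_normal(d)  # small Gaussian jitter
            # Metropolis-Hastings style accept/reject on the likelihood ratio
            log_ratio = loglik(x_prop, y) - loglik(current[i], y)
            if np.log(rng.uniform()) <= min(0.0, log_ratio):
                moved[i] = x_prop
                accepted += 1
        return moved, accepted / n

A full Metropolis-Hastings ratio would also involve the proposal densities once the gradient shift is included; the sketch follows the description given in the talk and keeps only the likelihood ratio, so it should be read as an approximation of the idea rather than the exact algorithm.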
Lambda can be sampled from a uniform distribution or chosen adaptively in some way, and this choice does affect the performance of the algorithm. We then form a regularised proposal, a Gaussian mixture, and the Metropolis-Hastings acceptance probability is formed based on this ratio; we accept or reject the new samples accordingly. One can compare the proposed algorithm with the large family of random-walk Markov chain Monte Carlo methods, where one can achieve a similar effect, but in our case the regularisation parameter can take negative values and we also restrict it to lie within a given interval. Theoretical results are given in the paper, with guarantees for convex log-likelihood functions.

The performance of this subgradient projection technique has been evaluated on a well-known example with forty states and with one hundred states. In the results we calculate the normalised average estimation error, that is, the norm of the true state minus the estimate divided by the norm of the actual state, for up to forty states, and you can see the reduction of this averaged norm error for the subgradient projection Markov chain Monte Carlo method compared with the sampling importance resampling particle filter and with the unscented Kalman filter.

Next we focus on the performance of the algorithm when the lambda parameter is alternating. As we can see, one can achieve better performance compared with the MCMC variant where the regularisation parameter is drawn from a uniform distribution, and also compared to the unscented Kalman filter. One can also see a much higher acceptance ratio with the alternating MCMC algorithm than when lambda is drawn from the uniform distribution. Then there is the linear case: this is the multivariate random walk model with one hundred states, and we show that the alternating MCMC can reach an accuracy comparable with the Kalman filter, which is the optimal solution for that case.

Another interesting problem is when the data are sparse, and there has been a lot of interest in this area: the term compressed sensing, or compressive sensing, was coined by Donoho in 2006, and there are a lot of works dealing with the linear case. It is notable because it works when we have only a limited amount of data. We know that the Shannon sampling theorem states that we can recover a signal completely if the sampling frequency is at least twice the maximum frequency of the signal. Compressed sensing shows that even if this condition is violated we can still recover a sparse signal. Following the theoretical derivations, the problem boils down to an optimisation which initially was formulated with the l0 norm, a non-polynomial (NP-hard) problem, but which can be reformulated as a convex optimisation problem minimising the l1 norm: we want to recover the signal x from a measurement vector whose dimension is much smaller than the dimension of the state vector.
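As an illustration of the kind of l1 reconstruction involved, here is a small Python sketch using iterative soft thresholding (ISTA) on the unconstrained Lagrangian form of the problem. The dimensions (one hundred states, thirty measurements, five non-zero entries), the random matrix H, and the regularisation weight are toy assumptions chosen only for the example, and this static routine is not the compressive sampling Kalman filter used in the comparison later.

    import numpy as np

    def ista(H, y, lam=0.05, n_iter=300):
        # Iterative soft thresholding for  min_x 0.5*||H x - y||_2^2 + lam*||x||_1,
        # a standard convex (l1) relaxation of the sparse-recovery problem.
        L = np.linalg.norm(H, 2) ** 2              # Lipschitz constant of the quadratic term
        x = np.zeros(H.shape[1])
        for _ in range(n_iter):
            z = x - H.T @ (H @ x - y) / L          # gradient step on the data-fit term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold (l1 proximal step)
        return x

    # Toy example: a 100-dimensional sparse state observed through only 30 measurements
    rng = np.random.default_rng(0)
    x_true = np.zeros(100)
    x_true[rng.choice(100, size=5, replace=False)] = rng.standard_normal(5)
    H = rng.standard_normal((30, 100)) / np.sqrt(30)
    y = H @ x_true + 0.01 * rng.standard_normal(30)
    x_hat = ista(H, y)  # sparse estimate even though dim(y) << dim(x)

The point of the toy example is only that the l1 penalty recovers a sparse x from far fewer measurements than states; in the filtering setting discussed next, this kind of reconstruction has to be combined with the state dynamics.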
This is possible if two conditions are satisfied. The first is sparsity: x has at most S non-zero components, where S is much smaller than the dimension. The second concerns incoherence of the measurement matrix H: its column subsets of a size related to S have to be well behaved. We compare the performance of the subgradient projection Markov chain Monte Carlo method with a recently developed compressive sampling Kalman filter; there the problem reduces to the minimisation of the l1 norm of the state such that the mathematical expectation of the squared norm of the error is bounded by a given number.

What is the problem here? One of the difficulties is that when the signal is sparse, this affects the observability of the system, and one might have unobservability of the states. In the example we have one hundred states but many fewer observations per time step, and the regular Kalman filter cannot work in these conditions, most likely because of the lack of observability. Let us see what results we have. This is the signal, which is sparse, and these are different realisations of the measurements, this one with more noise. For the example with one hundred states, this is the result we get: this is the Kalman filter, this is the compressed sensing Kalman filter, and it appears that the subgradient projection Markov chain Monte Carlo method has performance close to the compressive sampling Kalman filter, which is the best one when the observability conditions are not fully respected.

Let me conclude this talk. This work proposes a new Markov chain Monte Carlo method that improves the performance of sequential Monte Carlo filters by moving the particles into more likely regions based on a subgradient projection, and we compare it with several well-known filters. With the proposed proposal function, high accuracy is achieved. In future work we would like to look at more complex examples, more closely related to group and extended object tracking.

[Audience question, inaudible.]

Well, I think it is because when you use the gradient you push the particles in a better direction, and you improve the accuracy that way; otherwise there might be a lot of sample depletion.

[Audience question about how to choose the proposal distribution, partly inaudible.]

That is a very good question; it is not addressed in the paper. Usually the choice of proposal will also be related to the specifics of the application: maybe for group tracking methods one type of proposal would work better than another. Thank you.