0:00:13 So, in this paper we focus on particle filtering techniques for high dimensions. Such problems appear in application areas such as group object tracking and extended object tracking.
0:00:33 In the paper we first review recent works on group and extended object tracking.
0:00:41 Then we present the sequential Monte Carlo framework. The core of this work is the newly developed Markov chain Monte Carlo particle filter, which moves the cloud of particles to more likely regions based on a subgradient projection step, and we show that it works well for high-dimensional problems.
0:01:13 Then we compare the performance of this filter with the sampling importance resampling particle filter and the unscented Kalman filter.
0:01:25 We also go beyond that and study the case when the data are sparse. This can bring some complications, because the problem might become unobservable, and here we compare with the compressive sampling Kalman filter.
0:01:46 Finally, I will conclude the talk with some open questions and future work.
0:01:56 So, there has been a lot of interest recently in group and extended object tracking. Broadly, we can classify the works into two big groups: methods for a small number of groups with a relatively small number of objects, and sequential Monte Carlo methods for hundreds or thousands of objects, so for huge groups.
0:02:29 For small groups I have to mention some of the first works, which handle up to twenty objects with a Markov chain model combined with a Markov random field; this gives good results, however it is slow.
0:02:58 Then Koch and his group developed a range of techniques, Bayesian but not necessarily sequential Monte Carlo. One of the approaches is to look at this problem as tracking an extended object and estimating, in a Bayesian fashion, both the kinematic parameters and the extent, assumed to have a shape such as a circle or an ellipse; the problem then reduces to finding the extent together with the state, which is done with random matrices.
0:03:34 Then there is the big group of PHD filters, the probability hypothesis density filter, and the whole range of techniques with random finite sets.
0:03:47 Recently there was also an approach based on sequential Monte Carlo, and there has been work as well combining sequential Monte Carlo with evolving random graphs.
0:04:11 Then the case of a large number of objects within the group is especially challenging, not only because of the large dimension, but also because you cannot estimate the state of each individual object; so this problem is usually solved by forming a cluster and then estimating the center and the extent of that cluster.
0:04:37 So the extended object tracking problem then comes down to a formulation where you want to know where the center is and what the extent is, and then you need to solve both a state estimation problem and a parameter estimation problem. It would be good if we could decouple the two problems, because particle filters are not so good at parameter estimation, especially when the parameters are time invariant.
0:05:07 So, for the works in this group, including our own, one estimates the parameters with a separate estimation approach and then feeds them into the particle filter or another filter.
0:05:24 Then there is another interesting group of approaches combining other techniques with the Kalman filter, for instance nonparametric Bayesian methods, ones based on Gaussian process models, and so on.
0:05:42 So, what we do in our work: we focus on a high-dimensional estimation problem, nonlinear in general, described with the general state space equations, where the state transition function is nonlinear and the process noise can in general be non-Gaussian. We assume the Markovian property, so the state depends only on the previous state; the measurement equation is in general also nonlinear, and the measurement noise can be non-Gaussian.
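To make that setup concrete, here is a minimal sketch of such a model; the particular f, h, and the Gaussian noises below are illustrative placeholders, since the talk only requires nonlinearity and allows non-Gaussian noise.

```python
import numpy as np

def f(x):
    """Hypothetical nonlinear state transition (applied elementwise)."""
    return 0.5 * x + 25.0 * x / (1.0 + x**2)

def h(x):
    """Hypothetical nonlinear measurement function."""
    return x**2 / 20.0

def simulate(n_steps, dim, rng):
    """Simulate x_k = f(x_{k-1}) + v_k, y_k = h(x_k) + w_k."""
    x = rng.standard_normal(dim)
    xs, ys = [], []
    for _ in range(n_steps):
        x = f(x) + rng.standard_normal(dim)          # process noise v_k
        y = h(x) + 0.1 * rng.standard_normal(dim)    # measurement noise w_k
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)
```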
0:06:18 So, just briefly, we are solving the Bayesian estimation problem of finding the posterior state PDF given the data. We approximate the Chapman-Kolmogorov equation based on the particles and their weights, and the Bayesian update then combines this prediction with the likelihood of the incoming measurement.
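For reference, the recursion being approximated is the standard Bayesian filtering pair: the Chapman-Kolmogorov prediction followed by Bayes' update.

```latex
% Prediction (Chapman--Kolmogorov):
p(x_k \mid y_{1:k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid y_{1:k-1})\, \mathrm{d}x_{k-1}
% Update (Bayes' rule, normalised over x_k):
p(x_k \mid y_{1:k}) \propto p(y_k \mid x_k)\, p(x_k \mid y_{1:k-1})
```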
0:06:46 So, within the standard sequential Monte Carlo, we follow the prediction and update steps. For the prediction we have our proposal, actually the transition prior in our case, and we obtain the predicted PDF, where the particles spread due to the process noise. In the update step, when the measurement comes, we reweight the particles by combining them with the likelihood, and then there is a resampling step.
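A compact sketch of those two steps plus resampling, assuming for illustration Gaussian noises and the transition prior as proposal:

```python
import numpy as np

def sir_particle_filter(ys, f, h, n_particles, dim, meas_var, rng):
    """Sketch of the SIR steps described above (Gaussian noises assumed)."""
    particles = rng.standard_normal((n_particles, dim))
    estimates = []
    for y in ys:
        # Prediction: propagate through the dynamics; process noise spreads the cloud.
        particles = f(particles) + rng.standard_normal(particles.shape)
        # Update: weight each particle by the measurement likelihood.
        resid = y - h(particles)
        log_w = -0.5 * np.sum(resid**2, axis=1) / meas_var
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        estimates.append(w @ particles)
        # Resampling: duplicate likely particles, discard unlikely ones.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)
```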
0:07:24 Now, what are we doing? We move the cloud of particles with a Markov chain Monte Carlo method.
0:07:38 Some solutions of this kind exist in the literature. One can use Metropolis-Hastings: one can generate particles at time k-1 from the proposal distribution, and then the new particles at time k can be drawn in the following way. We simulate a sample (x'_k, x'_{k-1}) from the joint probability density function, where x'_k is drawn from the transition prior and the previous x'_{k-1} is uniformly drawn from the empirical distribution.
0:08:27 Within this version of the Metropolis-Hastings algorithm, one accepts or rejects the new candidate when the following condition is satisfied: if a uniformly generated random number is less than or equal to the minimum of one and the likelihood ratio, then we accept the candidate; otherwise we reject it.
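As a hedged sketch of that accept/reject scheme (the likelihood helper and the Gaussian transition prior here are assumptions for illustration):

```python
import numpy as np

def mh_particle_update(x_prev, x_curr, y, f, likelihood, n_iters, rng):
    """Metropolis-Hastings move as described: x'_{k-1} is drawn uniformly from
    the empirical distribution at k-1, x'_k from the transition prior, and the
    candidate is accepted with probability min(1, likelihood ratio)."""
    n = len(x_curr)
    for _ in range(n_iters):
        for i in range(n):
            j = rng.integers(n)                      # uniform draw from empirical dist.
            cand = f(x_prev[j]) + rng.standard_normal(x_prev[j].shape)  # transition prior
            ratio = likelihood(y, cand) / max(likelihood(y, x_curr[i]), 1e-300)
            if rng.random() <= min(1.0, ratio):      # accept/reject rule
                x_curr[i] = cand
    return x_curr
```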
0:08:54 This is a good algorithm, but when the state noise is relatively small, the moves can be rather small. There are recent improvements suggested by Simon Godsill and his group, where one can combine Metropolis-Hastings with a Gibbs sampler, and one can see that there is much better mixing, especially for large groups.
0:09:27 There are other algorithms too, like the MCMC samplers of Septier and his group. What we do, as I will show on the next slide, is use subgradient information of the likelihood in order to move particles into more likely regions.
0:09:50 So we take some sample x'_k, propagated through this joint PDF, and then we calculate the move direction based on the logarithm of the likelihood function: this is the normalized subgradient of the log-likelihood evaluated at the particle x'. And we have a relaxation parameter, which can be sampled from a uniform distribution or can be chosen adaptively in some way, and this actually determines the performance of the algorithm.
0:10:38 Then we form the regularized proposal, a Gaussian mixture, and the Metropolis-Hastings acceptance probability is formed based on this rule; we then accept or reject the samples accordingly.
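A rough sketch of that move, under this reading of the slides (the uniform range for the relaxation parameter and the gradient helper are assumptions, not the paper's exact formulation):

```python
import numpy as np

def subgradient_move(particles, y, grad_loglik, lam_low, lam_high, rng):
    """Shift each particle along the normalized subgradient of the
    log-likelihood, scaled by a relaxation parameter drawn uniformly from
    [lam_low, lam_high] (which, as noted in the talk, may include negative
    values). The shifted particle then serves as the mean of the regularized
    Gaussian-mixture proposal entering the MH accept/reject step."""
    moved = np.empty_like(particles)
    for i, x in enumerate(particles):
        g = grad_loglik(y, x)                 # (sub)gradient of log p(y | x) at x
        norm = np.linalg.norm(g)
        lam = rng.uniform(lam_low, lam_high)  # relaxation parameter
        moved[i] = x + lam * g / norm if norm > 0 else x
    return moved
```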
0:11:06 One can compare this proposed algorithm with the large class of random walk Markov chain Monte Carlo methods, where one can achieve a similar effect; but in our case the relaxation parameter can also take negative values, and we restrict it within a given interval.
0:11:37 Theoretical results on convergence are shown in the paper for convex log-likelihood functions.
0:11:47 The performance of this subgradient projection technique has been evaluated over a well-known example, but with forty states and with one hundred states.
0:12:04 In the results we calculate the average normalized estimation error, actually the norm of the true state minus the estimate, normalized to the actual state, for the different filters and for dimensions up to forty states.
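One plausible reading of that metric, for concreteness (the exact definition is the paper's):

```python
import numpy as np

def average_normalized_error(x_true, x_est):
    """Norm of the estimation error, normalized by the norm of the true
    state at each time step and averaged over the run."""
    err = np.linalg.norm(x_true - x_est, axis=1)
    return np.mean(err / np.linalg.norm(x_true, axis=1))
```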
0:12:32 And you can see that there is a gain in this averaged norm error of the subgradient projection Markov chain Monte Carlo method over the sampling importance resampling particle filter and over the unscented Kalman filter.
0:12:55 Then in the next set of results we focus attention on the performance of this algorithm when we have the relaxation parameter lambda alternating. As we can see, one can achieve an even better performance compared with the MCMC where the regularization parameter is drawn from a uniform distribution, and also compared to the unscented Kalman filter. One can also see a much higher acceptance ratio with the alternating MCMC algorithm than with lambda drawn from the uniform distribution.
0:13:49 Next are the results for a linear case: this is actually the multivariate random walk model, with one hundred states, and what we show is that the alternating MCMC can reach an accuracy which is comparable with the Kalman filter, which is the optimal solution for that case.
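For reference, the optimal baseline in that linear Gaussian case is the standard Kalman filter; a compact sketch for a multivariate random walk (the known covariances Q, R and the zero initial state are illustrative assumptions):

```python
import numpy as np

def kalman_random_walk(ys, H, Q, R, dim):
    """Standard Kalman filter for x_k = x_{k-1} + v_k, y_k = H x_k + w_k;
    the optimal estimator when the model is linear Gaussian."""
    x = np.zeros(dim)
    P = np.eye(dim)
    estimates = []
    for y in ys:
        P = P + Q                          # predict (identity dynamics)
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ (y - H @ x)            # measurement update
        P = (np.eye(dim) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```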
0:14:18 Then another interesting problem is when the data are sparse. Several terms have been used in this area, but the term compressed sensing, or compressive sensing, was coined by Donoho in 2006.
0:14:42 There are a lot of works dealing with the linear case. It is remarkable because it works when we have a limited amount of data: we know the Shannon sampling theorem states that we can recover a signal completely if the sampling frequency is at least twice the maximum frequency of the signal. What compressed sensing shows is that even if this condition is violated, we can still recover a sparse signal, thanks to the compressed sensing theoretical derivations.
0:15:28 The problem boils down to an optimization which initially was formulated with the L0 norm, which is an NP-hard problem, but which was then reformulated as an optimization problem with minimization of the L1 norm. We want to recover the signal x when we have a measurement vector whose dimension is much smaller than the dimension of the state vector.
0:16:00 And this is possible if two conditions are satisfied. One is sparsity: x has at most S nonzero components, where S is a measure of the sparsity. The second one is incoherence: every subset of columns of the matrix H of size up to S should be well conditioned, behaving nearly like an orthonormal system.
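As a generic illustration of the L1 recovery just described, here is an ISTA solver for the usual Lagrangian surrogate; lam and n_iters are arbitrary illustrative choices, and this is not the specific filter compared against below.

```python
import numpy as np

def ista_l1(H, y, lam=0.1, n_iters=500):
    """Iterative shrinkage-thresholding for min 0.5*||y - Hx||^2 + lam*||x||_1,
    the convex surrogate of the sparse recovery problem."""
    x = np.zeros(H.shape[1])
    step = 1.0 / np.linalg.norm(H, 2) ** 2       # 1 / Lipschitz constant of the gradient
    for _ in range(n_iters):
        z = x - step * (H.T @ (H @ x - y))       # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft-thresholding
    return x
```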
0:16:27 We compare the performance of the subgradient projection Markov chain Monte Carlo method with the recently developed compressive sampling Kalman filter of Carmi, Gurfil and Kanevsky.
0:16:50 In that filter the problem reduces to the minimization of the L1 norm, such that the mathematical expectation of the L2 norm of the error is bounded by a given number.
0:17:07 So what are the problems here? One of the difficulties is that the sparseness of the signal affects the observability of the system, and one might not be able to observe many of the states. In the example we consider one hundred states, but the number of observations is much smaller, and the regular Kalman filter cannot work in these conditions, because of the lack of observability. Let's see what kind of results we have.
0:17:48 So this is the signal, with its sparse components shown as bars, and on the right you can see a measure of its complexity: one realization is perfectly sparse, whereas the other realizations shown here are with more noise.
0:18:16um
0:18:16for the example and unique example with
0:18:19and hundred states
0:18:20uh
0:18:22this is the result we get this is there common field to
0:18:25this is the compressed C uh sensing of common you the
0:18:30uh and it seems that the
0:18:33step gradient projection a markov chain monte carlo method
0:18:37has a performance of close to
0:18:40to the compressive sampling common field to which is the optimal one
0:18:45when the of the of the B conditions are not fully
0:18:49um
0:18:51respect
0:18:53 And let me conclude this talk. This work proposes a new Markov chain Monte Carlo method that improves the performance of sequential Monte Carlo methods by moving the samples into more likely regions, based on a subgradient projection, and we compared it with several well-known filters. So in this work we actually propose a new proposal function with which high accuracy is achieved, and in the future we would like to look at more complex examples, more related to group and extended object tracking.
0:19:53 [Applause; inaudible audience question.]
0:20:18 Well, I think so. It is because when you use the gradient you push the particles in a better direction, and then you improve the accuracy that way. Otherwise there might be a lot of samples that get depleted.
0:20:44 That is a very good question; those details are not in the paper.
0:21:04 [Inaudible audience question.]
0:21:28 Usually one uses a small tuning of the proposal, true, but it will also be related to the application; I expect that, for a specific application, one type of proposal will work better than another.
0:21:48 Thank you.