This talk is on distributed Gaussian particle filtering using likelihood consensus, and it is joint work with my co-authors.
So, first let me summarize the contribution of this paper. We propose a distributed implementation of the Gaussian particle filter, which was originally introduced in a centralized form in the paper of Kotecha and Djurić in 2003. In this distributed implementation, each sensor computes a global estimate based on the joint likelihood function of all sensors. The joint likelihood function, or its approximation, is obtained at each sensor in a distributed way using the likelihood consensus scheme, which we proposed in our previous paper at the 2010 Asilomar conference.
Here we also use a second stage of consensus algorithms to reduce the complexity of the distributed Gaussian particle filter.
Here is a brief comparison with some other consensus-based distributed particle filters. The first of these papers, from 2010, uses no approximations, so its estimation performance can be better, but on the other hand its communication requirements can be much higher than in our case. The second, from 2008, uses only the local likelihood functions; in contrast, we use the joint likelihood function at each sensor, so the estimation performance of our method is better.
Okay, so let's start with a brief overview of distributed estimation in wireless sensor networks. We consider a wireless sensor network composed of K sensor nodes that jointly estimate a time-varying state x_n. Each of the sensors, indexed by k, obtains a measurement vector z_{n,k}. An example is target localization and tracking based on the sound emitted by a moving target.
The goals we would like to achieve are the following: each sensor node should obtain a global state estimate x̂_n based on the measurements of all sensors in the network; this can be important, for example, in sensor-actuator or robotic networks. We would like to use only local processing and short-distance communication with neighbors; no fusion center should be used, and no routing of measurements throughout the network.
We also wish to perform sequential estimation of the time-varying state x_n from the current and past measurements of all sensors in the network. We consider a nonlinear, non-Gaussian state-space model, but with independent additive Gaussian measurement noises. Such a system is described by the state-transition pdf and the joint likelihood function, or JLF, where z_n is the collection of the measurements from all sensors.
In this case, optimal Bayesian estimation amounts to the calculation of the posterior pdf shown here, and sequential estimation is enabled by a recursive posterior update, where we turn the previous posterior into the current one using the state-transition pdf and the joint likelihood function. The joint likelihood function is important if we want to obtain global results based on the measurements of all sensors.
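The recursive posterior update mentioned here is the standard Bayesian filtering recursion; written in the notation of this talk (state x_n, all-sensors measurement collection z_n), it has the form below. This is a generic sketch of the recursion, not a reproduction of the slide's exact equation.

```latex
f(\mathbf{x}_n \mid \mathbf{z}_{1:n})
  \;\propto\;
  f(\mathbf{z}_n \mid \mathbf{x}_n)
  \int f(\mathbf{x}_n \mid \mathbf{x}_{n-1})\,
       f(\mathbf{x}_{n-1} \mid \mathbf{z}_{1:n-1})\,
       \mathrm{d}\mathbf{x}_{n-1}
```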
Okay, so now let's have a look at the distributed Gaussian particle filter. It is well known that for nonlinear, non-Gaussian systems, optimal Bayesian estimation is typically infeasible. A computationally feasible approximation is provided by particle filtering, or the sequential Monte Carlo approach. One of the many particle filters is the Gaussian particle filter proposed in this paper, where the posterior is approximated by a Gaussian pdf, and the mean and covariance of this Gaussian approximation are obtained from a set of weighted samples, or particles.
What we propose is a distributed implementation of the Gaussian particle filter, where each sensor uses a local Gaussian particle filter to sequentially track the mean and covariance of a local Gaussian approximation of the global posterior. In this case, the measurement update at each sensor uses the global joint likelihood function, which ensures that global estimates are obtained. The JLF is provided to each sensor in a distributed way using the likelihood consensus scheme that we propose in this paper. Some advantages are that the consensus algorithms employed by likelihood consensus require only local communication and operate without routing protocols, and that no measurements or particles need to be exchanged between the sensors.
So here I'll show the steps that each sensor performs, i.e., the steps of a local Gaussian particle filter. First, at time n, each sensor obtains the Gaussian approximation of the previous global posterior. Then it draws particles from this Gaussian approximation and propagates them through the state-transition model, so basically it samples new predicted particles from the state-transition pdf. Then we need to calculate the joint likelihood function at each sensor, and to do this we use the likelihood consensus; this step requires communication between neighboring sensors. After that, each sensor can update the particle weights using the obtained joint likelihood function. This is how it's done: basically, we evaluate the joint likelihood function at the predicted particles; that's why we need the joint likelihood function at each sensor as a function of the state x_n, for pointwise evaluation. Once we have the particles and weights, we can calculate again the mean and covariance of the Gaussian approximation of the global posterior, and the state estimate is basically equal to the mean calculated here.
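The steps above can be sketched as follows. This is an illustrative sketch, not the authors' code: `transition` and `jlf` are placeholder callables standing in for the state-transition sampling and for the (in the distributed filter, consensus-provided) joint likelihood evaluation.

```python
import numpy as np

def gpf_step(mean, cov, transition, jlf, rng, num_particles=1000):
    """One time step of a (local) Gaussian particle filter.

    mean, cov   -- Gaussian approximation of the previous global posterior
    transition  -- samples x_n given the previous particles x_{n-1}
    jlf         -- evaluates the (approximate) joint likelihood at the particles
    """
    # 1) Draw particles from the previous Gaussian approximation.
    particles = rng.multivariate_normal(mean, cov, size=num_particles)
    # 2) Propagate through the state-transition model (predicted particles).
    particles = transition(particles, rng)
    # 3) Weight the predicted particles with the joint likelihood function.
    weights = jlf(particles)
    weights = weights / weights.sum()
    # 4) Mean and covariance of the new Gaussian approximation of the
    #    global posterior; the state estimate is the mean.
    new_mean = weights @ particles
    centered = particles - new_mean
    new_cov = (weights[:, None] * centered).T @ centered
    return new_mean, new_cov
```

Note that no resampling step is needed: as in the centralized Gaussian particle filter, the next time step simply draws fresh particles from the new Gaussian approximation.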
So now let's have a look at how the likelihood consensus scheme operates. In this paper we consider the following measurement model: we have a measurement function h_{n,k}(x_n), which is in general nonlinear; it depends on the sensor index k and possibly also on the time n; and there is an additive Gaussian measurement noise, which is assumed to be independent from sensor to sensor. Due to this, the joint likelihood function is obtained as a product of the local likelihood functions, and therefore in the exponent of the joint likelihood we have a sum over all sensors; this is the expression S_n here. For the purposes of statistical inference, since the expression S_n completely describes the joint likelihood function, we will focus on a distributed calculation of S_n. It is necessary to obtain it as a function of the state x_n; z_n is just the collection of the measurements from all sensors, which is observed and hence fixed.
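Concretely, with measurements z_{n,k} = h_{n,k}(x_n) + v_{n,k} and independent Gaussian noises v_{n,k} with variances σ²_{n,k}, the product of local likelihoods yields an exponent of the following form. This is a sketch in simplified scalar-measurement notation; the paper's exact S_n may collect constants and terms differently.

```latex
f(\mathbf{z}_n \mid \mathbf{x}_n)
  = \prod_{k=1}^{K} f(z_{n,k} \mid \mathbf{x}_n)
  \;\propto\; \exp\!\big(S_n(\mathbf{x}_n)\big),
\qquad
S_n(\mathbf{x}_n)
  = -\sum_{k=1}^{K}
      \frac{\big(z_{n,k} - h_{n,k}(\mathbf{x}_n)\big)^2}{2\,\sigma_{n,k}^2}
```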
A direct calculation of S_n would require that each sensor knows the measurements and also the measurement functions of all the other sensors in the network, but we assume that initially each sensor only has its local information, so we would need to somehow route this local information from each sensor to every other sensor. Instead, we choose another approach: we suitably approximate S_n by locally approximating the sensor measurement functions, and we do the approximation in such a way that we can then use consensus algorithms to compute S_n.
Here we use a polynomial approximation of the sensor measurement functions; h̃ denotes the polynomial approximation. The functions p_r(x_n) here are basically the monomials of the polynomial, but in principle we could use other basis functions to obtain a more general approximation. The coefficients α of this approximation are calculated using least-squares polynomial fitting, and as the data points for this least-squares fit we use the predicted particles of the particle filter. It is important to note that the approximation, i.e., the α coefficients, is obtained locally at each sensor, so we don't need to communicate anything to do that.
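As an illustration of this local fitting step, a sensor can fit its measurement function on the predicted particles with an ordinary least-squares solve. This is a sketch for a scalar state with a monomial basis; the basis choice and degree here are assumptions, not the paper's exact setup.

```python
import numpy as np

def monomial_basis(particles, degree):
    """Monomials 1, x, x^2, ..., x^degree of a scalar state,
    evaluated at the predicted particles."""
    return np.vander(particles, degree + 1, increasing=True)

def fit_measurement_function(h, particles, degree):
    """Least-squares coefficients alpha such that
    h(x) ~= sum_r alpha[r] * x**r at the predicted particles."""
    A = monomial_basis(particles, degree)   # design matrix (one row per particle)
    y = h(particles)                        # local measurement-function values
    alpha, *_ = np.linalg.lstsq(A, y, rcond=None)
    return alpha
```

Because the fit uses only the sensor's own measurement function and the locally available predicted particles, no communication is required for this step.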
Now, if we substitute the polynomial approximation h̃ for h in the expression for S_n, we obtain an approximation S̃_n. Since the h̃ are polynomials, the sum over all sensors is also a polynomial, but of twice the degree, so we can write it as shown here. The coefficients of this polynomial, the β coefficients, contain for each sensor all of its local information, i.e., its measurement as well as the α coefficients of the approximation of its local measurement function. What's important is that these coefficients are independent of the state x_n; the only way the state enters this expression is through these monomials or, more generally, basis functions.
Now, if we exchange the order of summation here, we get a polynomial whose coefficients T are obtained as a sum over all sensors. Therefore these coefficients contain information from the entire network, and we can view them as a sufficient statistic that fully describes S̃_n, and in turn also the approximate joint likelihood function. So, as we see, if each sensor knows these coefficients T, then it can evaluate the approximate joint likelihood function for more or less any value of the state x_n.
Since these coefficients are obtained, as already said, by a summation over all sensors, they can be computed at each sensor using a distributed consensus algorithm. This is basically how it operates: each sensor computes the local coefficients β from its locally available data, and then the sum over all sensors is computed in a distributed way using consensus. This requires only the transmission of some partial sums to the neighbors, so we don't need to transmit measurements or particles, and the communication load can therefore be much lower.
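A minimal sketch of this summation step, assuming a standard linear average-consensus iteration with a fixed step size (the graph, weights, and iteration count here are illustrative choices, not the paper's):

```python
import numpy as np

def consensus_sum(local_coeffs, neighbors, num_iters=50, step=0.2):
    """Approximate, at every sensor, the network-wide sum of the local
    coefficient vectors beta_k using only neighbor-to-neighbor exchanges.

    local_coeffs -- (K, R) array, one beta vector per sensor
    neighbors    -- list of neighbor-index lists (connected, undirected graph)
    step         -- must be small enough for stability (< 1 / max degree)
    """
    x = np.array(local_coeffs, dtype=float)   # per-sensor consensus states
    K = x.shape[0]
    for _ in range(num_iters):
        # Each sensor moves toward the states of its neighbors.
        x_new = x.copy()
        for k in range(K):
            for j in neighbors[k]:
                x_new[k] += step * (x[j] - x[k])
        x = x_new
    # Average consensus converges to the network mean; multiply by K
    # to recover the sum over all sensors.
    return K * x
```

Note that each iteration exchanges only the current partial state with the neighbors, which is why neither measurements nor particles ever need to be transmitted.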
Okay, I'll just briefly mention a reduced-complexity version of the distributed Gaussian particle filter. In this reduced-complexity version, each of the K local Gaussian particle filters uses a reduced number of particles; we divide the number of particles by a factor equal to the number of sensors. Each sensor calculates a partial mean and a partial covariance of the global posterior, again using the joint likelihood function obtained via the likelihood consensus. After that, the partial means and covariances can be combined by means of a second stage of consensus algorithms. If the second stage uses a sufficient number of iterations, the estimation performance of the reduced-complexity version will be effectively equal to that of the original one. So we reduce the computational complexity, but of course we introduce some new communication, so it comes at the cost of some increase in communication.
Okay, now I'll show you a target tracking example and some simulation results. In this example, the state represents the 2-D position and the 2-D velocity of the target, and it evolves according to this state-transition equation. We simulate a network of randomly deployed acoustic amplitude sensors that sense the sound emitted by the target. The measurement model is the following: the sensor measurement function is basically given here; we have the amplitude of the source divided by the distance between the target and the sensor. In principle, the sensor positions can be time-varying, so we could apply this method also to dynamic sensor networks.
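A sketch of such an acoustic amplitude measurement function. The source amplitude `A` and the plain inverse-distance law are assumptions for illustration; the talk's exact model may use a different decay exponent or a distance offset.

```python
import numpy as np

def acoustic_amplitude(state, sensor_pos, A=10.0):
    """Sound amplitude received by a sensor: source amplitude divided by
    the distance between the target position and the sensor.

    state      -- target state [x, y, vx, vy]; only the position is used
    sensor_pos -- 2-D sensor position (may be time-varying)
    """
    target_pos = np.asarray(state[:2], dtype=float)
    distance = np.linalg.norm(target_pos - np.asarray(sensor_pos, dtype=float))
    return A / distance
```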
This is the setting: the sensors are deployed in a field of dimension 200 by 200 meters, and the network consists of 25 acoustic amplitude sensors. The proposed distributed Gaussian particle filter and its reduced-complexity version are compared with a centralized Gaussian particle filter. We used 1000 particles; to approximate the measurement functions we use a polynomial of degree 2, which leads to 14 consensus algorithms that need to be executed in parallel, so basically in one iteration of likelihood consensus each sensor needs to transmit 14 real numbers. We compare likelihood consensus using eight iterations of consensus with the case where we calculate the sums exactly; the latter can be seen as the asymptotic case, i.e., more or less an infinite number of consensus iterations.
Okay, here, just as an illustration, we see that the green line is the true target trajectory and the red one is the tracked one. This is the result from one of the sensors, but in principle all sensors obtain the same result.
Here is the root-mean-square error performance versus time. The black line is the centralized case, and as expected it is the best one. If we look at the distributed case with exact sum calculation, that is the red line; there is a slight performance degradation. Of course, if we only use eight iterations of consensus, we get the blue line, which has slightly worse performance again, but even if we compare the blue and red lines to the black one, i.e., to the centralized case, the performance degradation is not so large. Here is the average RMSE, averaged also over time, versus the measurement noise variance. As the noise variance rises, the error rises too, but more or less the comparison between the three methods is the same as in the first figure.
Here is the dependence of the estimation error on the number of consensus iterations, and, of course, as the number of iterations increases, the performance gets better. What's interesting is that when we compare the solid blue curve with the solid red one, i.e., the distributed Gaussian particle filter and its reduced-complexity version, for a lower number of iterations the reduced-complexity version has slightly better performance. We could explain this, more or less, in such a way that the second stage of consensus algorithms helps to further diffuse the local information throughout the network.
Okay, so to conclude: we proposed a distributed Gaussian particle filtering scheme in which each sensor runs a local Gaussian particle filter that computes a global state estimate reflecting the measurements of all sensors. To do this, we update the particle weights at each sensor using the joint likelihood function, which we obtain in a distributed way by likelihood consensus. A nice thing about likelihood consensus is that it requires only local communication of some sufficient statistics, so no measurements or particles need to be communicated, and it is also suitable for dynamic sensor networks. We also proposed a reduced-complexity variant of the distributed Gaussian particle filter, and the simulation results indicate that its performance is good even in comparison with the centralized Gaussian particle filter. Okay, so that concludes my talk. Thanks.
Is this insensitive to k? You take a static number of polynomial terms, right? Yes, that's the order of the polynomial. And this approximation is good for all k?
You mean for all sensors? Yes. Yeah, I mean, in this application we use the same type of measurement function at each sensor, so that's why we also used the same approximation for all sensors. But in principle you could have different measurement functions at different sensors, and then you would need to use different orders of polynomials, yes.
Do all sensors obtain the same value of the global likelihood function?
Well, I mean, I think you cannot guarantee that exactly; it depends on the size of your network, and the bigger the network, the more iterations you need.
Yes, there are slight differences, depending on the number of iterations.
No, actually, in the Gaussian particle filter you don't need any resampling, because you construct the Gaussian posterior and then you sample new particles from it. But yes, if you have an insufficient number of iterations, then, because each of the nodes operates separately, each node has its own set of particles and its own set of weights, and there will be slight differences.
No, no, it's not the case; I mean, it's just as you're saying. Yeah, okay.