0:00:14 | so my name is Vikram Krishnamurthy and i'm presenting the talk; my co-authors are

0:00:19 | my graduate student Mustafa, and also Lang White who is in the audience; Lang is from the University

0:00:24 | of Adelaide

0:00:26 | uh so just to kind of give you a pictorial

0:00:29 | description of what the problem is, it's kind of nice to start with something artistic, so uh

0:00:34 | this is the main idea

0:00:36 | we are looking at

0:00:38 | a basic scenario where you're trying to track a moving target so

0:00:54 | signal processing deals with that —

0:00:56 | there are zillions of papers where what you do is some sort of bayesian estimate of the state of

0:01:01 | the target given noisy measurements

0:01:03 | uh and i've kind of written that abstractly

0:01:16 | once you have those track estimates

0:01:18 | typically you have some human being looking at a

0:01:21 | television screen and saying okay this target

0:01:24 | is doing something interesting

0:01:26 | while this target is not doing something interesting

0:01:29 | what we wanna do is automate that high level inference

0:01:32 | that's why we call this metalevel tracking so

0:01:34 | in other words

0:01:46 | now if i had a pointer — ah, here's the thing

0:01:49 | with the red button

0:01:50 | so the idea is, given these dots

0:01:54 | how do you join them — essentially

0:01:57 | can you tell from these dots what sort of trajectory the target is going to do

0:02:01 | and

0:02:02 | do this in a bayesian sense through some sort of nonlinear filter:

0:02:05 | estimate the trajectory

0:02:07 | so this is to automate

0:02:09 | several possible tasks which are of interest

0:02:11 | for example you might have a target

0:02:14 | which you're interested in, which goes to a checkpoint, fills up fuel, goes to another point and then

0:02:19 | a final destination

0:02:21 | um you're only interested in targets which follow this trajectory, go through those specific points

0:02:27 | how can we construct

0:02:28 | an estimate of such a trajectory

0:02:30 | in a bayesian sense

0:02:32 | so this kind of builds on a legacy tracker

0:02:36 | and the goal is to see how you could construct good models for this

0:02:40 | and once you do construct a good model, how can you actually compute with it

0:02:45 | so uh just to give you a further example, this is a typical example which kind of motivated

0:02:50 | why we did some of this research

0:02:52 | so this could be maybe somewhere in a city where

0:02:57 | typically traffic

0:02:59 | road traffic goes like this

0:03:00 | down the road

0:03:02 | and then some day

0:03:04 | due to maybe the local people knowing that something was deployed in the road — maybe an

0:03:08 | IED or something

0:03:10 | you see a lot of traffic

0:03:12 | bypass a particular part of the road

0:03:15 | so probably a tracker sees all the dots of the people bypassing that part

0:03:19 | so previously you were going in a straight line

0:03:21 | now you kind of deviate and then go back

0:03:25 | how could you actually construct some way to classify that

0:03:29 | given the dots

0:03:32 | so an obvious way of modelling this would be that

0:03:35 | in some sense i'm deviating

0:03:37 | some amount

0:03:38 | X

0:03:38 | from the road

0:03:40 | and then coming back by the same amount

0:03:42 | now that amount X could be variable — maybe i go all the way here and then go back

0:03:46 | so we need a model which is scale invariant

0:03:48 | which allows you to go in a particular direction for a random number of steps

0:03:53 | and then you need to remember that number of steps to come back — and we will construct a

0:03:57 | model which does that
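the deviate-and-return pattern described above can be sketched in a few lines of code; this is only an illustrative sketch — the move labels and step bounds are hypothetical, chosen to make the scale invariance concrete:

```python
import random

def sample_uturn(max_steps=10, seed=None):
    """Sample a deviate-and-return move sequence: n steps away from the
    road, an arbitrary stretch along the detour, then exactly n steps
    back.  The random count n must be remembered until the return leg,
    which is exactly what a finite-state Markov model cannot do."""
    rng = random.Random(seed)
    n = rng.randint(1, max_steps)   # random deviation amount X
    m = rng.randint(0, max_steps)   # arbitrary length of the detour
    return ["away"] * n + ["along"] * m + ["back"] * n

moves = sample_uturn(seed=7)
# the defining invariant: #away == #back, whatever n turned out to be
assert moves.count("away") == moves.count("back")
```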

0:03:59 | it's kind of obvious that a standard finite state markov chain cannot do that

0:04:03 | a markov chain cannot embed a memory which is

0:04:06 | variable in it

0:04:07 | i mean of course you can construct a state space which is the maximal possible size

0:04:11 | but that's ridiculous because of all the possible states — you'd

0:04:13 | suffer a possible state

0:04:15 | explosion

0:04:15 | so we need something smarter than a markov chain

0:04:18 | so uh that's basically the idea of what we're gonna talk about

0:04:21 | so you can think, from an abstract point of view, that this is a special case of a trajectory which we

0:04:26 | call a u-turn

0:04:27 | where your deviation away from the road is equal to the deviation back towards it

0:04:31 | and this deviation could be random

0:04:33 | whatever it is

0:04:34 | so it could be anything

0:04:35 | another example could be a target which is just circling a building — for example to scope out something suspicious, to blow it

0:04:41 | up or whatever

0:04:42 | so

0:04:43 | can you tell from a bunch of dots —

0:04:45 | isolate only those trajectories

0:04:48 | where a target is circling a building

0:04:49 | for example going in a rectangle

0:04:52 | uh or for example maybe the car is just going straight by and is benign

0:04:55 | so how do you isolate different trajectories using some sort of nonlinear filtering operation um

0:05:04 | okay so just to kind of again reinforce the idea that

0:05:08 | in conventional target tracking — which i guess has lots of papers in signal processing and so on —

0:05:12 | the main idea is you construct a state space model for the trajectory of the target

0:05:17 | that's fantastic if you're doing very short ranges over time because you can approximate things by random walks

0:05:24 | but over long periods of time the driver is not a random walk

0:05:27 | he has a well defined destination

0:05:29 | and they're typically going through for example

0:05:32 | a street map

0:05:33 | a digital map for example which you could construct from google

0:05:36 | or whatever

0:05:37 | so how do you construct

0:05:39 | an efficient model which can do that, and then the question is how can you estimate with that

0:05:43 | so the main idea of the talk is we're dealing with a particular class of models which are called stochastic

0:05:48 | context free grammars

0:05:49 | and maybe it's useful to start with this picture here

0:05:52 | so if you did an elementary course in theoretical computer science

0:05:55 | you would see that a markov chain

0:05:58 | or if you like state space models

0:06:00 | are

0:06:00 | regular grammars

0:06:02 | they assume that what happens now depends probabilistically

0:06:06 | on what happened at the previous time point

0:06:09 | that's not true for most complicated trajectories, because if you're gonna go in a square

0:06:14 | you have a long range dependency because your starting and ending points are the same

0:06:18 | that's not markovian

0:06:19 | so

0:06:20 | in computer science there is this more general model called a context free grammar

0:06:25 | which is the level of abstraction we're gonna talk about today

0:06:28 | where

0:06:29 | you can actually construct

0:06:31 | several very interesting

0:06:33 | trajectories

0:06:34 | which are non-markovian

0:06:35 | such as rectangles, arcs and so on

0:06:38 | a bunch of shapes which, you can prove,

0:06:40 | a markov chain cannot generate

0:06:42 | and we can talk about those

0:06:45 | now of course there are even more general languages

0:06:47 | such as context-sensitive grammars

0:06:49 | but they're of no use to us because they do not have polynomial real time algorithms

0:06:53 | something that's not polynomial is not,

0:06:56 | from an engineering point of view,

0:06:57 | useful — and there are restricted versions, just like the one i'm talking about

0:07:01 | it's kind of a watered-down english so to speak

0:07:03 | so typically regular grammars are

0:07:06 | markovian — you know, the sort of stuff everybody does

0:07:08 | what we're trying to do here is something a slight generalisation: the context free

0:07:12 | domain

0:07:14 | so this is called the chomsky hierarchy of formal languages, and our goal is to see can one construct

0:07:19 | the equivalent filters — such as the kalman filter or the hidden markov model filter and so on —

0:07:24 | in this domain where you have long range dependence

0:07:27 | okay now i can give a little bit more insight

0:07:30 | now you might say why is this of any use, and i want to give you some more insight as to

0:07:34 | why this is of use

0:07:36 | for starters this is a very convenient model for human operators

0:07:40 | you can imagine that

0:07:42 | ideally what happens is, after the tracker operates — as i showed in the first slide — we're

0:07:47 | trying to build something on top of that

0:07:49 | which helps the human

0:07:51 | automate the process of detecting if something is doing

0:07:54 | something interesting

0:07:55 | now it's pretty hard to teach the average soldier probability theory and detection and so on and make them understand

0:08:01 | how to construct a bayesian estimator

0:08:03 | but you can automate that process by allowing the average soldier just to specify

0:08:08 | to put in

0:08:09 | uh at the

0:08:10 | metalevel, a statement that if a target is doing something it's suspicious

0:08:13 | and if it's not doing something it's not suspicious

0:08:15 | so essentially you can quantify these rules

0:08:19 | into

0:08:19 | fairly interesting types

0:08:21 | of production rules, which you combine

0:08:24 | and you can check the consistency of those with a compiler, which i won't describe here but it's pretty

0:08:28 | easy to do

0:08:29 | the second point is once you do that

0:08:32 | just like for regular grammars, in the context free grammar domain you have

0:08:36 | polynomial algorithms

0:08:38 | for estimation

0:08:39 | a lot of this work has really been motivated by recent research in bioinformatics

0:08:44 | where you also have long range dependencies

0:08:46 | and i want to give you some insight as to what i mean

0:08:48 | so this is a typical bioinformatics type example: this is

0:08:51 | a protein which coils

0:08:54 | or a given DNA — a nucleotide sequence — which also coils like

0:08:58 | this. now the dependencies here are spatial, so this guy here depends a lot on this guy even though it's

0:09:03 | pretty far away when you unroll the coil

0:09:06 | rather than having a linear dependence

0:09:08 | okay so markovian type processes cannot model this sort of dependency, where this guy

0:09:12 | is spatially very close to you

0:09:14 | but far along the chain

0:09:16 | another example:

0:09:17 | think of a computer language

0:09:18 | this is a thought experiment — it's not an actual useful thing but it's a nice

0:09:22 | thought experiment

0:09:23 | suppose you take

0:09:24 | something like a computer language like C or whatever

0:09:28 | you have a begin-if

0:09:30 | and then you have a lot of stuff and then at the end

0:09:32 | an end-if. if a computer language was markovian, then right after a begin-if,

0:09:35 | with high probability, you'd see the end-if for the begin-if

0:09:38 | although in an actual computer language you have a lot of stuff in between

0:09:41 | so now you can think of the thought experiment:

0:09:42 | suppose i transmit this computer language

0:09:45 | this code, and it's corrupted by noise

0:09:47 | how do i decode it

0:09:48 | now i can't use a markov chain because this is not a markovian

0:09:52 | representation

0:09:53 | so that's another example

0:09:56 | so the goal is how can you construct useful models here

0:09:59 | and the point is

0:10:00 | they should be scale invariant, you see

0:10:02 | all of these structures are scale invariant: i can have a begin-if, i can have a

0:10:06 | lot of junk in between, and then an end-if

0:10:08 | so the size of how much stuff is in between

0:10:11 | should not matter for the context of the idea

0:10:14 | so that's something which we want as well

0:10:16 | now it turns out that

0:10:17 | for such models, people have shown — i'm not gonna give you any of the studies here —

0:10:22 | that

0:10:22 | in terms of entropy

0:10:24 | these context free grammars

0:10:26 | do fantastically: that is,

0:10:28 | for an equal size parameterization of a model

0:10:32 | if we look at the

0:10:33 | predictive entropy

0:10:35 | these are extremely efficient models when you have this sort of scale invariance

0:10:39 | okay so let's now jump into

0:10:41 | the actual types of trajectories we want to talk about

0:10:44 | so this is the first trajectory, a random walk, which is simply

0:10:48 | a good old markov chain which evolves on a state space

0:10:51 | this is widely used

0:10:53 | in target tracking, but as i said it's pretty useless because no driver is drunk, simply flipping a coin

0:10:57 | to decide where he's gonna drive

0:10:59 | they go in a directed way from A to B

0:11:01 | a much more sophisticated model, on which my colleague Lang White has done some pretty cool work,

0:11:05 | is where you know the beginning and you know the end

0:11:09 | and you want — so it's called a reciprocal process, or a markov bridge, which is a generalisation

0:11:13 | of a brownian bridge: you clamp the two ends, but you start with this

0:11:18 | definition as a random process

0:11:19 | and then you construct

0:11:21 | a dependency where each point now depends on the next point and the previous one

0:11:24 | so this is a one dimensional markov random field

0:11:27 | and that's called a markov bridge

0:11:29 | now you can think of a much more general version of that, which is a stochastic context free grammar

0:11:33 | so rather than saying

0:11:35 | i start from A

0:11:36 | and i'm restricted to reach point B at a fixed future time

0:11:41 | i now want to go through particular waypoints — these could be for example these ones

0:11:45 | so i want the target to go from A to C to B

0:11:49 | and in between you allow a markovian trajectory

0:11:53 | how can i now, uh,

0:11:55 | given this sort of model

0:11:56 | construct from noisy estimates

0:11:58 | the actual trajectory — that would be the question we want to answer

0:12:01 | so here are examples of cases where you cannot fit a simple markov chain

0:12:06 | so as i said, one of them is an arc

0:12:08 | so if i go in this direction A for n points

0:12:12 | and then an arbitrary number of points in this direction B

0:12:15 | and then come back,

0:12:16 | C, for n points

0:12:18 | you can prove

0:12:19 | by something called the pumping lemma in computer science that

0:12:21 | such a string

0:12:23 | when you have n A's

0:12:25 | a bunch of B's and then n C's, where n is arbitrary

0:12:28 | cannot exclusively be generated by a markov process — it's impossible. you can actually show

0:12:33 | the same for

0:12:34 | a rectangle

0:12:36 | so suppose i go an equal number of points here, then here, then an equal number of points and then backwards

0:12:41 | such a closed trajectory cannot be generated by a markov chain

0:12:44 | the intuition is that a markov chain cannot store memory in

0:12:47 | its state — a markov chain is simply a regular automaton

0:12:50 | to store memory you need something called a pushdown stack: you have to store stuff and then pop it out

0:12:55 | of the stack

0:12:56 | and that's what a grammar can do
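the pushdown idea above can be made concrete with a small recognizer for the arc-shaped strings a^n b^m c^n — a sketch for illustration only, using the letters from the arc example:

```python
def is_arc(s):
    """Recognize strings of the form a^n b^m c^n (n >= 1, m >= 0) with a
    pushdown stack: push one symbol per 'a', pop one per 'c'.  A finite
    state machine cannot do this, since the count n is unbounded."""
    stack, i = [], 0
    while i < len(s) and s[i] == "a":   # outbound leg: push memory
        stack.append("a")
        i += 1
    if not stack:
        return False
    while i < len(s) and s[i] == "b":   # arbitrary middle stretch
        i += 1
    while i < len(s) and s[i] == "c":   # return leg: pop memory
        if not stack:
            return False
        stack.pop()
        i += 1
    return i == len(s) and not stack    # all input read, stack empty

assert is_arc("aaabbccc") is True    # three a's matched by three c's
assert is_arc("aaabbcc") is False    # three a's but only two c's
```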

0:12:58 | so anyway, so these are kind of

0:13:00 | the trajectories. i just want to very quickly give you some insight

0:13:05 | as to how this differs from the standard construction of a

0:13:08 | hidden markov model, which is

0:13:10 | what you typically do

0:13:12 | when you deal with target tracking

0:13:13 | at the low level

0:13:15 | so most of you would know what a hidden markov model is — a probabilistic function of a

0:13:19 | markov chain

0:13:21 | we start with the transition matrix, so this is an underlying markov chain

0:13:25 | which evolves over time

0:13:27 | and we observe the markov chain

0:13:29 | through a two-letter alphabet, A and B

0:13:31 | well, these are the probabilities of the observation A given state one

0:13:35 | these are the probabilities of the observation A given state two — so this is how you parameterize it

0:13:41 | um now

0:13:42 | you can rewrite this in the type of rules we want to use to express a context free

0:13:46 | grammar on the next page

0:13:47 | by simply saying that we have a bunch of terminals, which are what you observe

0:13:52 | and a bunch of non-terminals, which are states

0:13:55 | okay

0:13:55 | and essentially you can codify all of these into rules: if i start

0:13:59 | from some starting state S1

0:14:01 | then i generate

0:14:02 | an 'a' and remain in S1 with probability point four five — and that's pretty clear: it's the point

0:14:06 | nine in my transition matrix of going from S1 to S1

0:14:09 | multiplied by the probability of generating 'a': point nine times point five is point four five. and i can do

0:14:13 | this for all possibilities, and these would be the production rules of

0:14:16 | the grammar, which can generate any string which is markovian

0:14:20 | this is how a regular grammar works

0:14:22 | and then i simply start, for example at S1

0:14:25 | i use a rule — S1 gives me 'a' S2 with some probability, point two — and that's my simulation

0:14:29 | i keep generating, and what you see here is the markov chain grows

0:14:33 | as a bayesian network from left to right

0:14:36 | the dependency is just

0:14:37 | one step

0:14:38 | this is trivial, this is well known
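the regular-grammar reading of an HMM can be simulated directly; in this sketch the transition and observation numbers are illustrative assumptions — only the 0.9 x 0.5 = 0.45 rule probability is taken from the talk:

```python
import random

P = [[0.9, 0.1],                 # assumed transition matrix of the chain
     [0.2, 0.8]]
B = [{"a": 0.5, "b": 0.5},       # assumed observation probabilities
     {"a": 0.4, "b": 0.6}]

def generate(length, seed=None):
    """Grow a string strictly left to right by firing production rules
    S_i -> o S_j, each with probability P[i][j] * B[i][o] (for instance
    S1 -> a S1 has probability 0.9 * 0.5 = 0.45).  This is exactly a
    Markov chain simulation: the dependency is one step back."""
    rng = random.Random(seed)
    state, out = 0, []
    for _ in range(length):
        rules = [(o, j) for j in range(2) for o in "ab"]
        weights = [P[state][j] * B[state][o] for o, j in rules]
        o, j = rng.choices(rules, weights=weights)[0]
        out.append(o)
        state = j
    return "".join(out)

s = generate(20, seed=0)
assert len(s) == 20 and set(s) <= {"a", "b"}
```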

0:14:40 | now let me give you some insight into the stochastic context free grammar

0:14:43 | so here are the same rules as on the previous page

0:14:46 | the only difference

0:14:48 | is this last rule

0:14:49 | what you see here is that we have

0:14:52 | a non-terminal

0:14:53 | a state if you like

0:14:55 | which maps

0:14:56 | not just to another state but

0:14:58 | to a bunch of states

0:15:00 | and this is the key difference: you see, when you have a context free grammar, the chomsky

0:15:05 | sort of hierarchy of languages tells you that

0:15:07 | any time you map from a nonterminal on one side

0:15:10 | to a terminal and

0:15:12 | multiple non-terminals

0:15:14 | the strings of such a language cannot be modelled by a markov chain

0:15:19 | now this is —

0:15:20 | oh wait, so this grammar would generate any computer language you know, such as C

0:15:25 | or whatever; those are all examples

0:15:27 | of things we saw before

0:15:29 | of course this is a probabilistic version because we attach probabilities to the rules

0:15:33 | the bigger generalisation of course, which we won't consider here, is context sensitive

0:15:37 | in context sensitive grammars you have a

0:15:39 | a

0:15:40 | terminal followed by a non-terminal which maps to the right hand side — that would be the context

0:15:44 | it maps in

0:15:46 | anyway, so this is how you would generate,

0:15:48 | using these rules, a stochastic context free grammar string

0:15:51 | and what you notice here is

0:15:53 | the recursive embedding: it doesn't just grow left to right, it grows inside out

0:15:57 | so in other words

0:15:58 | you see this 'a' and this 'b'

0:16:01 | get further and further away and yet they depend on each other

0:16:04 | here is long range dependence

0:16:06 | which you can picture

0:16:09 | this is

0:16:09 | in very simple terms

0:16:11 | how you generate a stochastic context free grammar
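the inside-out growth can be sketched with a toy stochastic context free grammar (the rules and probability are assumptions for illustration, not the talk's grammar): S -> a S b with probability p, S -> a b with probability 1 - p.

```python
import random

def generate_scfg(p=0.4, seed=None):
    """Generate a string from the toy SCFG  S -> a S b (prob p),
    S -> a b (prob 1 - p).  The string grows inside out: each firing of
    the first rule wraps the current string in a new matched a...b pair,
    so the outermost 'a' and 'b' drift arbitrarily far apart while
    remaining dependent on each other -- the long range dependence."""
    rng = random.Random(seed)
    s = "ab"                     # terminating rule S -> a b
    while rng.random() < p:
        s = "a" + s + "b"        # recursive self-embedding S -> a S b
    return s

w = generate_scfg(seed=5)
n = len(w) // 2
assert w == "a" * n + "b" * n    # always of the form a^n b^n
```

no finite markov chain can generate exactly this a^n b^n language, which is the point of the example.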

0:16:13 | now to some mathematics

0:16:14 | so everything so far has been pictorial

0:16:17 | so

0:16:18 | how would you mathematically model this in terms of a

0:16:20 | stochastic process? well, these are actually special types of branching processes

0:16:25 | namely multi-type galton-watson branching processes

0:16:28 | where the order of the offspring of the branching process matters

0:16:32 | so you can view this as saying, roughly speaking, that each realisation is a

0:16:36 | tree

0:16:37 | or a parse tree

0:16:38 | so each realisation is a parse tree

0:16:40 | so every time you realise it

0:16:42 | you have a different parse tree

0:16:44 | just like for

0:16:45 | state space models you have realisation theorems — in this case they're called chomsky normal forms and so on

0:16:50 | and to prove that a string does not belong to a particular language

0:16:55 | there are well defined rules which are called pumping lemmas

0:16:58 | which can be used to show that

0:17:00 | okay now i guess i want to give you some more insight as to how you can process these sorts of

0:17:04 | strings

0:17:04 | so i have a string, i have a bunch of observations which are noisy, and i want to say: has this

0:17:09 | target gone in a straight line, has it done a u-turn, or has it gone in a

0:17:13 | circle

0:17:15 | these are context free grammar type things which cannot be recognized by a markov chain

0:17:18 | what sort of algorithms exist for this

0:17:20 | well, you know, in the markovian case

0:17:22 | there are standard filters like the hidden markov model filter, which is analogous to a kalman filter

0:17:27 | if we want to do a posterior sequence estimate, there's the viterbi algorithm

0:17:31 | and so on

0:17:32 | if you want to estimate the parameters, there are things like maximum likelihood estimation by expectation maximization

0:17:38 | it turns out

0:17:38 | all these things generalize to the stochastic context free grammar domain

0:17:42 | so just like you have a forward-backward filter

0:17:44 | in a stochastic context free grammar, because you evolve on a parse tree

0:17:48 | you have something called the inside-outside algorithm

0:17:52 | just like you have the viterbi algorithm, in the stochastic context free grammar case you have the cocke-

0:17:57 | younger-kasami algorithm, the most likely parse

0:18:00 | which we use

0:18:02 | so

0:18:02 | essentially what you have done is you have generalized the evolution of a random process

0:18:08 | on a straight line

0:18:10 | to something which evolves on a tree, and that's basically the key idea — eventually the

0:18:15 | tree has long range dependencies because branches start from a common ancestor

0:18:18 | okay so that's uh —

0:18:20 | so this is the linear hmm, the

0:18:23 | bayesian network

0:18:24 | and

0:18:25 | this is

0:18:26 | for example the stochastic context free grammar — this is called the parse tree

0:18:30 | okay, and all these algorithms have polynomial complexity. what you lose is:

0:18:34 | typically for an

0:18:36 | hmm or any type of markovian process, when you do

0:18:40 | filtering

0:18:41 | the complexity

0:18:43 | of going through T points of data is linear in T

0:18:47 | a kalman filter processing a million points requires a million

0:18:50 | time steps

0:18:51 | in this case, for a stochastic context free grammar, it's actually cubic in the length of the data, but it's still

0:18:55 | polynomial

0:18:56 | it's cubic
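the cubic cost can be seen in a minimal inside (CYK-style) recursion; the one-nonterminal grammar below is an assumption purely for illustration, not the talk's model:

```python
def inside_probability(w, q=0.3):
    """Inside algorithm for a tiny grammar in Chomsky normal form:
    S -> S S with probability q, S -> 'a' with probability 1 - q.
    beta[i][j] is the probability that S derives w[i:j]; filling the
    table over all substrings and all split points is the O(n^3) cost
    mentioned in the talk."""
    n = len(w)
    beta = [[0.0] * (n + 1) for _ in range(n)]
    for i, ch in enumerate(w):                 # spans of length 1
        beta[i][i + 1] = (1 - q) if ch == "a" else 0.0
    for span in range(2, n + 1):               # longer spans
        for i in range(n - span + 1):
            j = i + span
            beta[i][j] = q * sum(beta[i][k] * beta[k][j]
                                 for k in range(i + 1, j))
    return beta[0][n]

# "aa" has the single parse S -> S S -> a a:
assert abs(inside_probability("aa") - 0.3 * 0.7 * 0.7) < 1e-12
```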

0:18:58 | okay |

0:18:58 | so uh here is an example of some rules — this is just to sort of give you a bit more detail; the

0:19:03 | paper gives the algorithms, simulations and so on

0:19:05 | so here is the sort of

0:19:07 | flow of traffic in normal life

0:19:10 | and this is actually a figure from google. one of the things you can do very nicely is

0:19:13 | you can actually codify

0:19:15 | different maps — digital maps — into constraints that you use

0:19:21 | so you have targets just going in a straight line, or a bunch of targets, maybe, each day

0:19:25 | and then you notice that

0:19:26 | some day, like this, people are deviating from the road

0:19:29 | how would you automate that — how do you come up with the bayesian inference that

0:19:32 | something funny is going on, that the people's behaviour has changed

0:19:35 | so you would codify

0:19:36 | the rule that people go

0:19:38 | an equal amount away and come back to the road by an equal amount

0:19:42 | which is a u-turn

0:19:43 | and the amount they go by is a random amount

0:19:46 | so the rule is fixed but you have a memory which is variable

0:19:49 | and that cannot be encoded in a

0:19:51 | markov chain

0:19:52 | and of course you're measuring these in noise, through a noisy sensor

0:19:55 | you can construct a grammatical model of this fairly easily in terms of production rules

0:19:59 | once you do that

0:20:00 | you can construct an optimal filter

0:20:02 | which gives you the optimal estimate of what's going on

0:20:06 | so

0:20:07 | in the time available, of course, i can't define how you derive these filters, but they're given

0:20:11 | in the paper and also in the references here

0:20:13 | so i just want to kind of conclude with some of the main points

0:20:16 | of what i want to say here

0:20:18 | so the first point is that these syntactic models, which are constructed as

0:20:22 | context free grammar models

0:20:23 | allow you to really look at complex

0:20:26 | trajectories of targets

0:20:28 | which cannot be dealt with in the markovian framework

0:20:31 | there are several classes of trajectories which, you can prove,

0:20:35 | cannot be generated by a markov process

0:20:37 | and

0:20:38 | so in some sense this allows us to go to a slightly more general class

0:20:43 | these are meant to replace the human operator who sits and looks at tracklets

0:20:48 | coming out of a tracker — you want to automate that function

0:20:51 | the second point is that these algorithms are of polynomial complexity, so they're practical

0:20:56 | just like

0:20:57 | for a state space model you'd call it filtering, here you would call it a parsing algorithm

0:21:01 | because you're really constructing a parse tree

0:21:05 | uh in many cases you can view this as a human-sensor interface, because there's been a lot of work

0:21:09 | done at the actual sensor signal processing level

0:21:13 | how can i automate the process of taking that

0:21:15 | estimate from the sensor level

0:21:17 | to something which helps a human

0:21:20 | determine if something's happening

0:21:21 | that's really what we're talking about here

0:21:23 | it's the human-sensor interface, also known as middleware

0:21:27 | there are some very interesting systems-theoretic issues which i haven't even mentioned here

0:21:30 | okay, so the first thing is: if we have a parse tree which keeps generating these non-terminals, does it eventually

0:21:35 | stop?

0:21:36 | of course you need the eventual length of your string to be finite

0:21:40 | so how do you ensure that's the case?

0:21:42 | those are stability issues — they use of course some pretty cool results on galton-watson processes

0:21:46 | and essentially you need the spectral norm of a particular matrix, involving the probabilities, to be less than one

0:21:51 | there are also constraints:

0:21:54 | if you know that the target is gonna end up at some particular location

0:21:57 | how do you constrain the probabilities in your model so that

0:22:02 | so that your model actually ends up there

0:22:04 | so those are also pretty interesting problems
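the stability condition on the spectral norm can be sketched numerically; the two-nonterminal grammar and its mean matrix below are assumptions for illustration, where M[i][j] is the expected number of copies of nonterminal j produced when nonterminal i is rewritten (the relevant quantity for branching processes is the spectral radius of M):

```python
def spectral_radius(M, iters=500):
    """Estimate the dominant eigenvalue of a nonnegative matrix by
    power iteration with max-norm normalization."""
    v = [1.0] * len(M)
    lam = 0.0
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in M]
        lam = max(abs(x) for x in w)
        if lam == 0.0:
            return 0.0
        v = [x / lam for x in w]
    return lam

# assumed grammar:  S -> a S b (0.4) | T (0.3) | a (0.3)
#                   T -> a T   (0.5) | b (0.5)
M = [[0.4, 0.3],
     [0.0, 0.5]]
# spectral radius below one: subcritical branching process, so the
# parse tree terminates and the generated string is almost surely finite
assert spectral_radius(M) < 1.0
```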

0:22:07 | finally, there are several unresolved issues in this problem. as i said, much of the detail has been developed in the

0:22:12 | bioinformatics literature, and only in the last ten to fifteen years

0:22:15 | and

0:22:17 | things like: how sensitive is your bayesian estimator to the prior?

0:22:22 | does it forget its initialization geometrically fast, which is true for example of the kalman filter and so on —

0:22:26 | these are unresolved issues

0:22:29 | so just to sort of end the picture: there's been a lot of work and research done in this area — i've just

0:22:33 | given you a couple of

0:22:34 | references here

0:22:35 | uh and we also use this quite a bit, not only to model trajectories but also to model

0:22:41 | for example radars

0:22:42 | complex sensors are not markovian because they

0:22:46 | have a feedback structure within them, and really you can view

0:22:50 | the

0:22:51 | output of a complex suite of sensors as a grammatical model

0:22:55 | and so essentially the idea is

0:22:58 | by doing that you can apply

0:23:00 | bayesian signal processing to come up with estimates

0:23:03 | so that's really all i wanted to say — thank you

0:23:09 | are there questions?

0:23:11 | uh

0:23:12 | you all should really be asking questions

0:23:17 | um can you go back to your original picture at the beginning where you had the

0:23:22 | the uh

0:23:22 | legacy tracker and

0:23:31 | oh

0:23:32 | yeah uh yeah right so

0:23:34 | you have here

0:23:35 | um the

0:23:37 | tracklets coming into the middle level, and you have an arrow pointing down saying feedback

0:23:42 | uh

0:23:43 | can you comment on that

0:23:45 | so one additional point i didn't mention is: now suppose you can infer that the target is actually going in a

0:23:50 | rectangle

0:23:51 | you can use that information to help you track at the lower level

0:23:54 | which your low level tracker doesn't know — your low level tracker is simply assuming a standard

0:23:58 | target

0:23:59 | flipping a coin deciding where it's going

0:24:01 | ah, if you had this high level inference that, you know, it's gonna make a right turn

0:24:04 | or if you look at the global maps and you figure out the constraint that on the road you can only

0:24:08 | go left or right — you can't go right off the road

0:24:11 | you can of course utilise that information to

0:24:13 | improve your estimates, and that's really good

0:24:16 | yeah

0:24:16 | i didn't mention that here — so of course there's a feedback structure where you can use that to

0:24:20 | improve your estimates

0:24:25 | yeah |

0:24:31 | uh could you please once more tell the differences between a stochastic context free grammar and a markov tree, as

0:24:37 | a small drawing — ah yes, that's a good point

0:24:40 | okay so the idea here is

0:24:43 | we're looking at — actually, i'll probably go to that little picture here

0:24:49 | okay so

0:24:51 | this would be a standard finite state markov chain, which is kind of a line

0:24:55 | and this is our parse tree, which is really a galton-watson branching process: so i start with

0:25:01 | a bunch of parents, they have children, those children have children and so on, and it keeps growing

0:25:05 | but the point is the order of the children — the name of the children — matters. in a standard multi-

0:25:09 | type galton-watson process you only care that they have three kids or four kids

0:25:13 | here the names of the children matter because they generate different trajectories

0:25:16 | eventually

0:25:18 | all these children stop, and that's your observation — the thing you read, and you read it from

0:25:22 | left to right, and that's for example the u-turn or some sort of string

0:25:26 | so that's the multi-type galton-watson process we're considering here

0:25:29 | so it's really a realisation of a tree whose size

0:25:34 | is not, you know, known a priori

0:25:36 | you can only

0:25:37 | model the expectation of that

0:25:38 | and you come up with the estimates of that and so on

0:25:49 | okay well, we're out of time — thank you very much, we'll go to the next speaker now