0:00:13 Okay, this is going to be a short presentation, I promise. The title of the talk is "A General Framework for HOSVD-Based Indexing and Retrieval with Higher-Order Data". This is joint work with my student and a former student of mine, who is now at Nokia Research.
0:00:37 The goal is to do indexing and retrieval based on motion trajectory data. This is an old problem; people first began to look at this issue in the late nineties.
0:00:53 Quite a bit of work has been done, and most of it centres around using different methods to reduce the dimensionality. In 2003 we introduced a method of doing it using PCA, but then we wanted to extend it to work with multiple trajectories simultaneously, and so we had to work in tensor space. So we tried to do something like PCA in tensor space, and we tried a number of different techniques for tensor decomposition, based on the higher-order SVD, on PARAFAC-type models, and on another technique that we developed ourselves, as well as variants of those.
0:01:37 What we are going to focus on today is the issue of how to deal with this problem when you are dealing with tensors but the query dimensionality does not match the tensor dimensionality, meaning you may have a different number of objects or, in particular in the case we have here, a different number of cameras. So we are actually dealing with a different number of objects and a different number of cameras; the query, for example, may have a single camera while the database has multiple cameras. And so the question is how you can do this without having to recompute a separate index for each scenario.
0:02:19 Then I'll talk a little bit about the invariance properties of the HOSVD, how to apply them to the indexing and retrieval problem, and present some experimental results.
0:02:31 The basic scenario of using a higher-order SVD for indexing and retrieval consists of looking at multiple motion trajectories from multiple targets simultaneously, building a compact representation in the form of a tensor, and finally reducing the dimensionality. In this particular case we are going to focus on the higher-order SVD, though more properly people refer to it as the Tucker decomposition; that would be the more accurate term, but in signal processing the term HOSVD has become ingrained, even though it is not the original terminology.
0:03:11 The origin of this is the following. If you look at a single trajectory, we can model it with, say, the X and Y coordinates of the trajectory over time. If we have two trajectories, we can model them as a matrix, and if we then look at the space of all of these pairs of trajectories, we get a tensor, a three-dimensional array. That is from a single camera. If we now want to extend this to multiple cameras, in this particular case two cameras, we go from a three-dimensional array to a four-dimensional array, and so it forms a higher-order tensor. And you can continue this with multiple modalities: you could use the same trick for doing indexing and retrieval with different modalities, going to higher and higher dimensions.
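To make the construction concrete, here is a minimal sketch of how such a database tensor could be assembled; the axis ordering (time sample, x/y coordinate, object, camera) and the sizes are my own illustration, not necessarily the convention used in the paper.

```python
import numpy as np

# Hypothetical sizes: T time samples, N tracked objects, C cameras.
T, N, C = 128, 4, 2

# A single trajectory from one camera: a T x 2 array of (x, y) over time.
single_trajectory = np.random.rand(T, 2)

# All N trajectories seen by one camera stack into a 3rd-order tensor: T x 2 x N.
one_camera = np.stack([np.random.rand(T, 2) for _ in range(N)], axis=-1)

# Adding the camera dimension gives the 4th-order tensor that gets indexed: T x 2 x N x C.
database_tensor = np.stack([np.random.rand(T, 2, N) for _ in range(C)], axis=-1)
print(database_tensor.shape)  # (128, 2, 4, 2)
```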
0:04:04 Now, the reason we want to work with the HOSVD is because of the following theorem. What I've done here is loosely paraphrase it in words; the precise mathematical description of the theorem is in the paper, and about a page and a half of the paper is devoted to the proof of the theorem. But basically what the theorem says is something quite intuitive.
0:04:27 We are all familiar with the Fourier transform. If you have a multi-dimensional Fourier transform, say you have taken the three-dimensional Fourier transform of something, and you now want the two-dimensional Fourier transform only, it is sufficient to simply look at the corresponding dimensions: you can just take the inverse with respect to the third one and you will have the right two-dimensional Fourier transform. The reason for that is the orthogonality property of the Fourier basis.
0:04:56 And the same thing is true here. That is, if I take the HOSVD, a tensor is decomposed into a core tensor and unitary matrices. Because of the orthogonality, the unitary property of those matrices, if I now take a sub-tensor, that is, a portion of the original tensor, and apply the HOSVD to it, I will get the same corresponding unitary matrices for the dimensions I have chosen for the sub-tensor, and I do not need to calculate them again from scratch. This means the corresponding index of the sub-tensor will be identical: it is built from the same modes, the same unitary matrices.
0:05:41 If you want the precise mathematical description of what I just said and what is written here, it is in the paper, along with its proof. I should say one more thing: this result was first shown for third-order tensors, as part of a PhD thesis at the University of London, and what we have done in this paper is extend it to arbitrary dimension; the result is always true no matter what the dimension is.
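To pin down the notation, here is a loose statement of the decomposition and of the invariance property as described in the talk; this is only a paraphrase, and the precise theorem and its proof are in the paper.

```latex
% HOSVD / Tucker decomposition of an N-th order tensor \mathcal{A}:
\[
  \mathcal{A} \;=\; \mathcal{S} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_N U^{(N)},
  \qquad U^{(n)H} U^{(n)} = I \ \text{for every mode } n,
\]
% i.e. a core tensor \mathcal{S} multiplied along each mode by a unitary factor matrix.
% Invariance property (paraphrased): if \hat{\mathcal{A}} is a sub-tensor of \mathcal{A},
% obtained by restricting the index range along some of the modes, then the HOSVD of
% \hat{\mathcal{A}} yields, along the retained modes, the same factor matrices as the
% HOSVD of \mathcal{A}, so the index computed once for the full tensor can be reused
% for the sub-tensor instead of being recomputed from scratch.
```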
0:06:10 This is critically important for us, because if we were to work with a different type of decomposition, like PARAFAC, parallel factor analysis, CANDECOMP, or any of the others, this property fails, and we would be unable to do anything that we are doing in this paper, because you would have to recompute everything from scratch for each such sub-tensor.
0:06:32 So given that we have this property, we can proceed along the lines of the original work we did for tensor decomposition, except that this time we do it on the sub-tensors only.
0:06:43 The indexing part proceeds along the very same lines. We have an HOSVD that we compute for the tensor, in this case the four-dimensional tensor. We then take the modes of the query and do a similar decomposition, but this time only along the modes we choose. We then slice it to obtain the index tensors, and the number of index tensors is computed as follows.
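As a rough illustration of this indexing step, here is a minimal NumPy sketch of the HOSVD computation; the function names and shapes are my own, and the mode-selection and slicing details of the actual algorithm are in the paper.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: bring axis `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_dot(tensor, matrix, mode):
    """Multiply `tensor` by `matrix` along the given mode."""
    t = np.moveaxis(tensor, mode, 0)
    out = matrix @ t.reshape(t.shape[0], -1)
    return np.moveaxis(out.reshape((matrix.shape[0],) + t.shape[1:]), 0, mode)

def hosvd(tensor):
    """Higher-order SVD: one orthonormal factor matrix per mode plus a core tensor."""
    factors = [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0]
               for m in range(tensor.ndim)]
    core = tensor
    for m, U in enumerate(factors):
        core = mode_dot(core, U.conj().T, m)
    return core, factors

# Index a (placeholder) 4-D database tensor: time x coordinate x object x camera.
database_tensor = np.random.rand(128, 2, 4, 2)
core, factors = hosvd(database_tensor)
```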
0:07:21 For the retrieval procedure, we simply compare the query index to the index tensors that we obtained before, and then just compute the Frobenius norm between the two.
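A minimal sketch of that comparison follows, again with made-up names; the query index is assumed to have been obtained by projecting the query onto the database factor matrices along the modes the query covers, and the exact matching rule in the paper may differ.

```python
import numpy as np

def retrieve(query_index, stored_indices):
    """Rank stored index tensors by Frobenius distance to the query index.

    `stored_indices` are the index tensors sliced from the database index along
    the modes available in the query, so each has the same shape as `query_index`.
    """
    distances = [np.linalg.norm(query_index - s) for s in stored_indices]
    return int(np.argmin(distances))  # position of the best match
```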
0:07:36 So the algorithm we compute is essentially the same as the one we presented several years back on tensor-based comparison for indexing and retrieval of motion trajectories.
0:07:49 The main difference between this work and our previous work is that our previous work was generic: it did not care which tensor decomposition you chose, and it applied it on the same tensor dimensionality for the query and for the data. That is a strong assumption, because we have no control over the query size, and this is especially true when you are dealing with multiple cameras and multi-camera tensor queries, because not all cameras have access to the same trajectories simultaneously. 0:08:21 So the main difference here is that we look only at the sub-tensor for which data are available, and then obtain the corresponding query representation from our original index, which is indexed over all possible modalities.
0:08:40 Here are the experimental results for this work. These are a collection of tensors built from the CAVIAR dataset, and these are from two camera sets. This is the corresponding precision-recall curve, and this is for complete queries. 0:09:06 These are the resulting index matrix sizes, and here are the indexing time and retrieval time. I should say that the indexing times for HOSVD are traditionally very good, and where it suffers is in the retrieval time. We do not remedy this, and you can see that reflected in the retrieval times here. What we do have to say is that we have to pay this price if we want the flexibility of dealing with different-size sub-tensors in the query and the database.
0:09:43 And here we do the same thing but for partial queries, so the query and the database are not the same size, and these are the corresponding precision-recall curves.
0:10:03 So, in short, our main message is that an HOSVD, or Tucker-type, decomposition, because of its orthogonality, is particularly useful in applications where you do not know the dimensionalities in advance and you need to mix and match them at query time. We have applied this general principle to motion trajectories in our case, but it can be applied to the analysis of any higher-order data, whether for retrieval or not, and we have shown that it actually works very well. 0:10:40 Thank you very much.