0:00:13 Thank you.
0:00:18 Good morning, ladies and gentlemen. My name is [inaudible]; as it was said, I am from the University of Technology, and it is a pleasure to present our work here today.
0:00:31 Today I would like to discuss the extended inter-view direct mode for multiview video coding.
0:00:39 To start with: the development of efficient solutions for multiview video coding becomes more and more important these days, as 3D video applications are getting more and more popular.
0:00:56 In the standardised solution, known as Annex H of the AVC / H.264 standard, a simple but quite efficient mechanism is used.
0:01:11 In this standardised solution, in order not to encode every view of the multiview sequence independently, we add reference pictures from other views to the reference picture lists.
0:01:35 This is the main idea of the inter-view compensation mechanism which is used in multiview video coding.
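A hedged aside, to make this mechanism concrete: a minimal sketch of such reference-list construction, with entirely hypothetical names (this is not the actual standard or reference-software code).

```python
from dataclasses import dataclass

@dataclass
class Picture:
    view_id: int  # which camera view the decoded picture belongs to
    poc: int      # picture order count, i.e. the time instance

def build_reference_list(temporal_refs, neighbour_pics, poc):
    """Temporal references of the current view come first; then the
    already-decoded pictures of neighbouring views at the same time
    instance are appended, enabling inter-view prediction."""
    inter_view_refs = [p for p in neighbour_pics if p.poc == poc]
    return list(temporal_refs) + inter_view_refs
```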
0:01:44 Unfortunately, the redundancy between the bitstreams of the neighbouring views is still quite big, so a field for further improvement exists.
0:01:57 On the other hand, there is some additional information in multiview video which can be adopted to decrease this redundancy between the bitstreams of the neighbouring views.
0:02:13 This information is the depth information: the depth maps, which describe the 3D geometry of the scene.
0:02:23 If we exploit this information in the encoding process, the performance of the multiview video codec should be higher and, as a result, the resulting bitstreams should be reduced.
0:02:43 So the main idea of this presentation is to use the depth information to improve the compression of multiview video; the use of the depth information opens up some new possibilities for prediction.
0:03:02 As we all know, the video bitstream contains mostly the control data, the prediction error, and the motion data produced due to prediction.
0:03:15 Reducing any of these kinds of data thus makes the bitstream smaller.
0:03:22 Today I will discuss only the prediction of the motion data, in order to decrease the bitrate.
0:03:30 Now, let us imagine that we have two views to encode. The reference view was already encoded into the bitstream, so this view is known at the time of encoding the other view.
0:03:49 What we actually try to do is to predict the motion data from the neighbouring view, using the 3D dependencies between objects in the scene; these dependencies are described by the depth information.
0:04:07 Now, a bit more detail.
0:04:12 First of all, I would like to point out that the prediction of the motion data from the reference view is performed independently for each point of the currently encoded picture, and that we use the depth information of the reference view only.
0:04:34 We check the depth value for each of the points in the reference view, and based on this information we project the point to a location in 3D space, using depth-image-based rendering; so we get the point's location in 3D space.
0:04:56 Next, we re-project this position into the currently encoded picture.
0:05:04 As a result we get a pair of points, one in the reference view and one in the coded view, and these points are a corresponding pair, so some connection between them exists.
0:05:20 The motion data for the point in the reference view is already known, because that view is already in the bitstream, so we can check the motion vectors and reference indices of the block which contains this point,
0:05:41 and simply rewrite them to describe the motion vectors and reference picture indices of that point in the coded view.
0:05:49 So this is the main idea of the proposed algorithm.
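To make the per-point procedure concrete, here is a hedged sketch under an assumed pinhole camera model; the camera matrices, the `motion_at` accessor, and all other names are illustrative assumptions, not the reference-software implementation.

```python
import numpy as np

def warp_point(u, v, depth, K_ref, R_ref, t_ref, K_cur, R_cur, t_cur):
    """Project reference-view pixel (u, v), whose depth is known, into 3D
    space and re-project it into the currently encoded view.
    Cameras are assumed to follow x_cam = R @ X_world + t."""
    # Back-project the pixel to a 3D world point using its depth value.
    ray = np.linalg.inv(K_ref) @ np.array([u, v, 1.0])
    x_world = R_ref.T @ (depth * ray - t_ref)
    # Re-project the 3D point into the currently encoded camera view.
    p = K_cur @ (R_cur @ x_world + t_cur)
    return p[0] / p[2], p[1] / p[2]

def derive_motion(ref_pic, ref_depth, cams, width, height):
    """For every point of the reference picture, find the corresponding
    point of the current picture and copy the motion data to it."""
    predicted = {}  # (x, y) in current picture -> (motion vectors, ref indices)
    for v in range(height):
        for u in range(width):
            uc, vc = warp_point(u, v, ref_depth[v, u], *cams)
            if 0 <= uc < width and 0 <= vc < height:
                # Rewrite the motion vectors and reference picture indices
                # of the reference-view block to the re-projected position.
                predicted[int(uc), int(vc)] = ref_pic.motion_at(u, v)
    return predicted
```

Here `cams` is assumed to be the tuple `(K_ref, R_ref, t_ref, K_cur, R_cur, t_cur)` of calibration data for the two views.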
0:05:54 To evaluate this idea, we integrated it with the MVC reference software, JMVC.
0:06:06 The implementation was done as a new macroblock mode for video compression, which we call the extended inter-view direct mode, EIVD.
0:06:21 Now let us look at how we adapted it to the MVC coding scheme.
0:06:29 Let us say that we have three views to encode. The basic scheme which is used in MVC is, more or less, like the one in the picture.
0:06:40 We have a base view, C0, which is encoded first, so for that base view there are no other views available as references, and there are two more views, C2 and C1, each of which is encoded with one or two reference views.
0:07:03 I would like to point out that for view C2, the inter-view reference which is used in this scheme exists only for the anchor picture areas.
0:07:20 So for the time instance T0 there is such a reference, but please look at the pictures for time instance T1 and the others: there is no arrow between views C0 and C2.
0:07:36 In contrast, for view C1, for every picture, for every time instance, there are arrows between views C0 and C1 and between views C2 and C1.
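Purely as an illustration of the structure just described (the identifiers below are mine, not from the reference software), the inter-view dependencies can be summarised as a small table:

```python
# Which inter-view references each view may use, and for which pictures:
# "anchor_only"   - only at anchor time instances (e.g. T0),
# "every_picture" - at every time instance.
INTER_VIEW_REFS = {
    "C0": {},                                            # base view: none
    "C2": {"C0": "anchor_only"},
    "C1": {"C0": "every_picture", "C2": "every_picture"},
}
```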
0:07:52 However, the only modification, in comparison with standard simulcast coding, is that the reference picture lists are modified in such a way that, for each time instance, pictures from the neighbouring views are added to the reference picture lists.
0:08:22 After adding the EIVD mode into this scheme, we get some new prediction possibilities.
0:08:34 I would like to point out that these new arrows are somewhat abstract: we do not modify the reference picture lists, as we do not use any new pictures.
0:08:50 These arrows only tell us that for these pictures we can predict the motion information from other views using the proposed tool.
0:09:06 What is also worth noting is that the new mode is used only for the non-base views, because the base view has no reference views, so there is no view to refer to.
0:09:24 The other thing is that the new mode can be applied only to the non-anchor pictures, because in the anchor pictures there are no motion vectors we could reuse:
0:09:41 the only motion vectors there describe displacement not between different time instants but between different views, so we do not use those.
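These two restrictions amount to a simple applicability check; a hypothetical sketch, with names of my own choosing:

```python
def eivd_applicable(view_id: int, base_view_id: int, is_anchor: bool) -> bool:
    """EIVD needs an already-coded reference view, so the base view is out;
    anchor pictures carry only inter-view (disparity) vectors rather than
    temporal motion, so there is nothing to rewrite for them either."""
    return view_id != base_view_id and not is_anchor
```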
0:09:57 And now a few words about the evaluation of the EIVD mode.
0:10:03 We compared the EIVD mode with two codecs: JMVC 4.0, which is the reference software, and a previous version of the software, which also contains the motion skip tool that was adopted into it.
0:10:27 The motion skip tool uses a similar idea of predicting the motion information from the neighbouring views; however, it does not use any information about the 3D geometry of the scene, so it is an interesting comparison for our tool.
0:10:47 In our tests we used five different test sequences,
0:10:57 and the results were obtained for several QP values.
0:11:03 The last thing is that the bitstreams for which the results are presented are the bitstreams of a single view only, so I will show only the results for the coded view itself.
0:11:20 We left out the reference-view bitstream, and we left out the bitstreams needed to encode the depth data, because in our approach we assume that the depth information is already available at the encoder and the decoder for some other purposes,
0:11:38 so we do not include it in the presented bitrates.
0:11:46 Because in the reference multiview-plus-depth test material there were only three views available, there are two possible scenarios to check.
0:12:02 The first one is that we are coding view C2, and the reference view is the base view, C0;
0:12:13 the other scenario is the encoding of view C1, with the reference view C0.
0:12:21 And, finally, the results.
0:12:24 As you can see, adding the EIVD mode into the JMVC software gives some bitrate reduction in comparison with the unmodified reference software,
0:12:45 and, as we can see, this bitrate reduction holds for every test sequence.
0:12:54 The orange results are those of the reference software with the motion skip tool enabled.
0:13:01 We also notice that, in the case of scenario 2, the comparison with the reference software (the blue one) again gives us a bitrate reduction.
0:13:20 However, in the case of the comparison against the motion skip tool, we see that there is one test sequence for which the motion skip tool performs better.
0:13:35 To summarise the results:
0:13:36 as we noticed, the extended inter-view direct mode clearly improves compression.
0:13:45 Against the reference software we get a bitrate reduction for all the test sequences, for all bitrates,
0:13:55 and the average bitrate savings were 6.9 percent and 5.3 percent in the case of scenario 2 and scenario 1, respectively.
0:14:14 Compared with the motion skip tool, we get a bitrate reduction for almost all of the sequences that were checked,
0:14:23 and the average bitrate savings are 11.3 percent and 2.5 percent for scenario 2 and scenario 1, respectively.
0:14:36 Now, to conclude my presentation:
0:14:42 we showed that depth information can be efficiently used to represent the motion data in multiview video bitstreams.
0:14:53 Thanks to the use of the 3D scene geometry, we get an accurate prediction of the motion data from the neighbouring views.
0:15:03 The whole algorithm increases the computational complexity of the codec only negligibly, and this is true for both the encoder and the decoder implementations.
0:15:17 The new idea was implemented as the extended inter-view direct mode, and, as we saw, the compression efficiency improvement was from 2.5 up to 11.3 percent;
0:15:33 we believe it can grow as more accurate depth data become available.
0:15:42 Last but not least, the fact that the tool uses only the depth maps associated with the reference view makes this solution adaptable to most multiview video applications; so, in applications where, for example, texture and depth are encoded together, or dependently, this solution is also applicable.
0:16:09 So, thank you very much.
0:16:18 Do you have questions?
0:16:24 (audience question, largely inaudible; apparently concerning the computational complexity of the method)
0:16:38 For that, we should note that it does increase the complexity of the decoder, because some additional operations are needed: the decoder still has to project the pixel locations, so some calculations are added, but this is not much.
0:17:01 (further exchange inaudible)
0:17:12 Any questions?
0:17:16 OK. Thank you.