Good morning everyone. My name is Jacek Konieczny, I am from Poznań University of Technology, and today I would like to discuss the extended inter-view direct mode for multiview video coding.

To start with: the development of efficient solutions for multiview video coding is becoming more and more important these days, as 3D video applications are getting more and more popular. The standard solution, known as Annex H of the AVC/H.264 standard, uses a simple but quite efficient idea. In order not to encode every view of the multiview sequence independently, reference pictures from other views are added to the reference picture lists. This is the main inter-view prediction mechanism used in the multiview codec. Unfortunately, the redundancy between the bitstreams of neighbouring views is still quite big, so the problem still exists. On the other hand, there is additional information in multiview video which can be exploited to decrease this redundancy between the bitstreams of the neighbouring views: the depth information, the depth maps, which describe the 3D geometry of the scene. If we exploit this information in the encoding process, the performance of the multiview video codec should be higher and, as a result, the bitrates should be reduced. So the main idea of this presentation is to use the depth information to improve the compression of multiview video, because the use of depth information opens up some new possibilities for prediction. As we all know, a video bitstream mostly consists of control data, prediction error data, and motion data.
Reducing any of these kinds of data makes the bitstream smaller; today I will discuss only the prediction of motion data to decrease the bitrate.

Let us imagine that we have two views to encode. The reference view was already encoded into the bitstream, so this view is known at the time of encoding the other view. What we actually try to do is to predict the motion data from the neighbouring view, using the 3D dependencies between objects in the scene; these dependencies are described by the depth information.

In a bit more detail: first of all, I would like to point out that the prediction of motion data from the reference view is done independently for each point of the currently encoded picture, and we use only the depth information of the reference view. We check the depth value for each of the points in the reference view and, based on this information, we project it to a location in 3D space using depth-image-based rendering. So we get the point's location in 3D space, and next we reproject this position into the currently coded picture. As a result we get a pair of points: one in the reference view and one in the currently coded view. These points are counterparts, so some connection between them exists. The motion data for the point in the reference view is already known, because that view is already in the bitstream, so we can check the motion vectors and reference picture indices of the block containing this point, and simply copy them to describe the motion vectors and reference picture indices of the point in the currently coded view. So this is the main idea of the proposal. To evaluate this idea, we integrated it with the MVC reference software, and the implementation was done as a new macroblock mode.
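The per-point procedure described above can be sketched as follows. This is only an illustrative sketch under a simple pinhole camera model; all function names, the camera parameters `K_ref`, `K_cur`, `R`, `t`, and the data layout are my own assumptions, not the reference software's implementation.

```python
import numpy as np

def unproject(u, v, z, K):
    """Back-project pixel (u, v) with depth z into 3D camera coordinates."""
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.array([x, y, z])

def project(p, K):
    """Project a 3D point onto the image plane, rounded to integer pixels."""
    u = K[0, 0] * p[0] / p[2] + K[0, 2]
    v = K[1, 1] * p[1] / p[2] + K[1, 2]
    return int(round(u)), int(round(v))

def inherit_motion(depth_ref, motion_ref, K_ref, K_cur, R, t, width, height):
    """For each pixel of the reference view: look up its depth, project it
    into 3D space (depth-image-based rendering), reproject it into the
    currently coded view, and copy the motion data (motion vector and
    reference picture index) to the counterpart pixel."""
    motion_cur = {}
    for v in range(height):
        for u in range(width):
            z = depth_ref[v][u]
            p3d = unproject(u, v, z, K_ref)   # pixel + depth -> 3D point
            p_cam = R @ p3d + t               # into the coded view's camera
            u2, v2 = project(p_cam, K_cur)    # 3D point -> pixel
            if 0 <= u2 < width and 0 <= v2 < height:
                motion_cur[(u2, v2)] = motion_ref[v][u]
    return motion_cur
```

A handy sanity check: with identical cameras (R = I, t = 0) every reference-view pixel maps onto itself, so each point simply inherits its own motion data.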
We call this new macroblock mode the extended inter-view direct (EIVD) mode. Now let us look at how we adapted the MVC coding scheme. Assume that we have three views to encode; the basic scheme used in MVC looks more or less like this. We have a base view, C0, which is encoded first, so for the base view there are no other views available as references. Then there are two more views, C2 and C1, each of which is encoded with one or two reference views. I would like to point out that for view C2, the inter-view reference used in this scheme applies only to the anchor pictures: at time instance T0 there is inter-view prediction, while at time instance T1 and the others there are no prediction arrows between views C0 and C2. In contrast, for view C1 there are prediction arrows from views C0 and C2 to C1 at every time instance. The only modification in comparison with standard simulcast coding is that the reference picture lists are modified in such a way that, for each time instant, pictures from the neighbouring views are added to them.

After adding the EIVD mode to this scheme we get some new prediction possibilities. I would like to point out that these additional arrows are, let us say, free: the EIVD mode does not modify the reference picture lists, as we do not use any new reference pictures; it only means that for these pictures we can predict the motion information from other views using the depth data. What we should also note is that the new mode is used only for the non-base views, because the base view has no reference view, so there is nothing to refer to.
The other thing is that the new mode can be applied only for non-anchor pictures, because in anchor pictures there are no motion data we can reuse: the mode copies only motion data, which describe changes between different time instants, while anchor pictures are predicted only from different views, not from different time instants.

Now a few words about the evaluation of the EIVD mode. We compared the EIVD mode with two codecs: JMVC 4.0, which is the reference software, and a previous version of the software which also includes the motion skip tool considered during MVC standardization. The motion skip tool uses a similar idea of predicting motion information from a neighbouring view, but it does not use any information about the 3D geometry of the scene, so it is an interesting point of comparison. In our tests we used five different test sequences, and the results were obtained for several QP values. The last thing to mention is that the bitrates presented are those of the bitstream for a single view, so I will show only results for the coded view itself. We left out the reference-view bitstream, and we also left out the bitstream needed to encode the depth data, because in our approach we assume that the depth information is already provided to the codec for some other purpose, so we do not include it in the results. Because in the reference multiview test material there were only three views available, there are two possible scenarios to check: the first one is coding view C2 with the base view C0 as the reference view, and the other scenario is coding view C1 with the reference view C0.
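The two restrictions just described (non-base view, non-anchor picture) amount to a simple eligibility test. The sketch below is my own illustration of that rule, not code from the reference software:

```python
def eivd_allowed(view_id, base_view_id, is_anchor):
    """Return True when the extended inter-view direct mode may be used.

    The mode is unavailable in the base view (there is no reference view
    to inherit motion from) and in anchor pictures (they are predicted
    only from other views, so they carry no temporal motion data)."""
    return view_id != base_view_id and not is_anchor
```

For example, a non-anchor picture of view C2 qualifies, while any picture of the base view C0 does not.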
And finally, the results. As you can see, after integrating the EIVD mode into the JMVC software we get a bitrate reduction compared with the reference software, and this bitrate reduction appears for every test sequence. The orange curves are the results of the reference software with the motion skip tool enabled. We also notice that scenario two, compared with the reference software (the blue curves), likewise gives us a bitrate reduction. However, in the comparison with the motion skip tool we see that there is one sequence for which the motion skip tool performs better.

To summarise the results: as we noticed, the extended inter-view direct mode clearly improves compression. Against the reference software we get a bitrate reduction for all the test sequences, for all bitrates, and the average bitrate savings were 6.9 percent and 5.3 percent for the two scenarios. Compared with the motion skip tool, we get a bitrate reduction for almost all the sequences we checked, and the average bitrate savings are 11.3 percent and 2.5 percent for the two scenarios.
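As a small illustration of how such figures are obtained (assuming a plain percentage rate reduction at comparable quality, averaged over sequences; the exact averaging used in the evaluation may differ, and these helper names are mine):

```python
def bitrate_saving_percent(ref_kbps, test_kbps):
    """Percentage bitrate reduction of the tested codec vs. the reference
    at comparable quality; positive values mean the tested codec wins."""
    return 100.0 * (ref_kbps - test_kbps) / ref_kbps

def average_saving(pairs):
    """Average the per-sequence savings over a list of (ref, test) rates."""
    savings = [bitrate_saving_percent(r, t) for r, t in pairs]
    return sum(savings) / len(savings)
```

So a sequence coded at 931 kbps against a 1000 kbps reference yields a 6.9 percent saving.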
To conclude my presentation: we showed that depth information can be efficiently used to predict the motion data in multiview video bitstreams, and thanks to the use of the 3D geometry of the scene we get more accurate prediction of the motion data from the neighbouring view, while the whole algorithm increases the computational complexity of the codec only negligibly. The new idea was implemented as the extended inter-view direct mode and, as we saw, the compression improvement was around two to three percent; we expect it can grow as more accurate depth maps become available. Last but not least, the mode uses only the depth data associated with the reference view, which makes this solution applicable to most multiview video applications: for example, applications where texture and depth are encoded together as well as those where they are encoded independently. Thank you very much.

[Audience question, largely inaudible; it concerns the complexity of the proposed mode.]

We should note that it does increase the complexity of the decoder, because some additional operations are used: we still have to project the pixel locations, and some further calculations are needed, but this is not much.

[No further questions.] Thank you.