Thank you, Mr. Chairman, for the introduction.

My name is [inaudible], and it is a pleasure to be visiting this university; this is a beautiful campus.

I want to thank you for staying for the last of the presentations. I am going to talk about the sampling pattern cube. I will introduce this model in a brief and clear way, and I hope to get useful questions from you. This has been joint work with my colleagues [names inaudible].

Well, first I would like to thank our funders, who supported us throughout this work.

This is the outline of the talk. First I will give a little bit of background and the motivation of the work, and then I will introduce the model, the SPC model, reviewing the scope and the definitions, and how we generate the SPC. We apply it to some known cases and see if it is good enough for us to extract features of interest, and then I will conclude the work, together with some hints of future work.

I would like to mention that our group at Mid Sweden University is named Realistic 3D, and we are interested in 3D information, images or video. We cover the whole 3D chain, starting from the capturing, through processing and transmission, to post-processing, adaptation to the display, and the viewing experience. So we cover the whole chain, but this talk will be about the capturing.

When we talk about capture, we mostly mean cameras and the parameters related to them, so we will talk about cameras in this presentation. When we say camera, there can be many different configurations. You are familiar with some of them, for sure; the DSLR cameras that are widely available, we all know about them. But for specific applications, such as light field capturing or other kinds of capturing, we usually go for unconventional camera setups, and it is important to be able to model them at the level of the light, and to be able to extract parameters with low complexity and, at the same time, a good level of description.

For example, this setup here uses a camera and a lens array to capture digital photographs. This is a very famous camera array setup; maybe you have seen it before. And this is a different camera setup, and here the famous Lytro camera, which is a plenoptic camera. So there are different configurations, and we would like to have a model that lets us extract parameters from them.

Like what we have here: there are different parameters related to one camera, and I have not seen a map or a measure that can put them on a common scale and let us compare them, for example to say that camera one is better than camera two in some sense, or that if you do this to the camera, then these parameters change in this way. We want that kind of behavioural and, at the same time, descriptive information about the camera system: what the camera system can do, and how. So, for different camera setups, these are usually the parameters of interest for different applications, and the ones I have pointed out here are more or less related to the focal properties of the camera; I will come back to this point later. For example, for an application where the angular resolution in a certain plane is more important for us, maybe we can extract parameters using the model and see that camera two, shown in the right column here, is better for this application at this distance from the object.

In any case, we want to be able to compare cameras one and two and come to a conclusion about which one to choose, or what modification to apply to the camera to get a better result. So the main point about the work we are doing is to keep the complexity of the model low and, at the same time, to give it a high descriptive level, which can be used for extracting features or modeling the system.

There are models that are widely used; there may be more, but these are two typical examples. One is the ray-based model, which considers light as light rays. You are familiar with the two-plane representation: with one point in each plane, the line connecting these two points is considered as the ray, and we go with this description. The method is usually the paraxial approximation, in which we consider that the angle of the light ray to the optical axis is small enough to apply some approximations. It is widely used in different applications such as ray tracing, photography, microscopy, or telescopes, and you are all familiar with this model. A more comprehensive, more complex model is the wave optics model, which describes light as an electromagnetic wave. The methods working with these electromagnetic waves usually start from Maxwell's equations, harmonic waves, and Fourier theory, and this model is able to explain more properties, but at the expense of more complexity.

We are going to introduce something in between these two models. The scope of the work is the following: it will be a geometry-based model; it will exclude wave optics, at least at this stage; and it applies to optical capturing systems, which can be, as I said, conventional cameras or new setups. So the motivation of the work is to have a framework for modeling complex capturing systems, and we expect that this model provides a kind of tool for extracting properties from the system, keeping in mind low complexity and a high descriptive level of the model.

So basically the model can be applied to different camera setups, and we generate the SPC using the tools of geometry and mathematics that we have. Here I am trying to illustrate the SPC model, the sampling pattern cube: we take a cube and put the light samples inside it, in the form of the light containers, which I will introduce, and from this model we extract features. This model is also helpful for visualization purposes, and for describing the sampling behavior of the system.

There can be wide applications for this model. First of all, study and design of capturing systems is one application. Another is investigating system variations: if we have a system and we vary the distances or properties at some parts of the system, how is that reflected in the sampling behavior of the system? Then there is the one I pointed out on the second or third slide, investigating inter-camera comparisons: which one is better, and in what sense, for a given application, where you want to compare different camera setups. And one possible application can be adaptation of the post-processing algorithms, about which I will say some more.

In this sampling pattern cube I am talking about, there is a very simple idea, and it originates from the light samples in this model. Light samples in this model are in the form of light containers, and we can relate them to ray tracing in the ray model. What is special about light containers is that they carry focused light: they are formed of focused light, so there is a point through which all of the light rays pass, and we call this point the tip position of the light container. There is also an angular span associated with the light container; in this representation we have four angles associated with it, but that is the representation at this stage. So a light container, which you will see in the slides coming next, has a tip position and an angular span, and the focal properties of the system are somehow coded into these samples.
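The tip-position-plus-angular-span element just described can be sketched in code. This is only an illustrative container in the 2D simplification used later in the talk; the class and field names are my own, not taken from the SPC paper's implementation.

```python
from dataclasses import dataclass

# Illustrative sketch of a light container in the 2D (X-Z) simplification:
# a tip point that all of the focused rays pass through, plus an angular span.
@dataclass
class LightContainer:
    x: float            # tip position across the optical axis
    z: float            # tip position along the optical axis
    theta_start: float  # first edge of the angular span (radians)
    theta_end: float    # second edge of the angular span (radians)

    @property
    def angular_span(self) -> float:
        """Width of the bundle of focused light rays through the tip."""
        return self.theta_end - self.theta_start

lc = LightContainer(x=0.1, z=0.0, theta_start=-0.05, theta_end=0.05)
print(lc.angular_span)  # 0.1
```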

The light containers then build up the sampling pattern cube, which I show to you here: small light containers distributed inside the cube. So we can say that the sampling pattern cube is the set of these light containers, and I will show how to generate it and how to use it. Basically, we have a camera, and there is the space in front of the camera, and we try to provide information about how this space is sampled by this camera, using the light container elements.

For ease of illustration, we make some simplifications in this presentation. In the slides coming next, I will consider only one row of the image sensor as the starting point; I will not go to the 2D image sensor, as it would be too complicated to put that on a plane and show it. So I will not show light containers in the 3D representation, but only go for an X-Z representation, with two angles, the starting and the finishing angle, at the tip. This is a simplification we make for illustration purposes.

There is one more thing. Instead of the X-Z space, if you have light containers like this, depicted with a tip position and a starting-to-finishing angle, we transform them to a position-angle representation. This position-angle representation has basically the x axis and the theta axis here, so at the x position of the tip we have an angular span along the theta axis. Instead of seeing cones like this, we will have pieces of lines like this. We should keep in mind that these lines being horizontal in this representation means the light is in focus: there is one positional value associated with the whole line, which is the tip of the light container. So we will see only horizontal lines like this in the sampling pattern cube.
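As a toy illustration of this picture (my own sketch, not code from the work): a focused container contributes a single x value for its whole angular span, which is exactly why it draws as a horizontal segment in position-angle space.

```python
# A focused light container in position-angle space: both endpoints of the
# segment share the tip's x coordinate, so the segment is horizontal.
def position_angle_segment(x_tip, theta_start, theta_end):
    """Endpoints (x, theta) of the container's segment in (x, theta) space."""
    return (x_tip, theta_start), (x_tip, theta_end)

p0, p1 = position_angle_segment(0.1, -0.05, 0.05)
assert p0[0] == p1[0]  # horizontal line: the light is in focus
```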

This one shows the simple idea behind how we generate the sampling pattern cube. We basically start from the camera. Consider this part inside the camera: these are the optical elements in the camera, which can be only a main lens or a combination of different lens setups, and this is the sensor plane. What we are going to do is to form light containers on the sensor plane based on physical properties of the sensor, basically the light acceptance angle of the sensor. With the light acceptance angle we define the first set of light containers. Then we back-trace these light containers into the scene, and every time a light container passes through an optical element, that container transforms into a new container; for example, this one is transformed to this one, so a new tip position and new angular information are assigned to the light container. Finally, in an iterative process, we project all the initial light containers to the 3D scene in front of the camera, and what we get is called the sampling pattern cube; we will work with that later. This is a more formal presentation of the same process. We have the flowchart: we first form the light containers, then go through this iterative process to project all the light containers to the scene, and finally we come up with the set of light containers in the form of the SPC.
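The flow just described might be sketched as follows. This is a hypothetical skeleton under the talk's 2D simplification: containers are (x, z, theta_start, theta_end) tuples, and each optical element is a function that returns the transformed container, or None if the light is fully blocked. All names and signatures are illustrative, not from a reference implementation.

```python
def generate_spc(pixel_positions, acceptance_angle, elements):
    # Step 1: form the initial containers on the sensor plane, one focused
    # container per pixel, spanning the pixel's light acceptance angle.
    containers = [(x, 0.0, -acceptance_angle / 2, acceptance_angle / 2)
                  for x in pixel_positions]
    # Step 2: back-trace iteratively; each element (aperture, lens, ...)
    # assigns a new tip position and angular span, or blocks the light.
    for transform in elements:
        containers = [out for lc in containers
                      if (out := transform(lc)) is not None]
    # The resulting set of containers, projected into the scene, is the SPC.
    return containers

# With no optical elements, the SPC is just the sensor-plane sampling:
spc = generate_spc([0.0, 1.0, 2.0], acceptance_angle=0.2, elements=[])
print(len(spc))  # 3
```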

I will not go into very much detail here, just give you some idea; we have optical elements like lenses, apertures, and so on, and you can refer to the paper for more information. For example, suppose a light container comes to an aperture. For an aperture, we know which plane the aperture is located on, and we know the size, or the opening area, of the aperture. For the light container coming to this aperture, part of it is cut out, because it does not fit inside the opening of the aperture, and part of it stays. So we will have a new light container like this, cutting the part which is not inside the aperture span, and we come to this point, which is the new light container, and we go to the next iterative step.
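The aperture rule can be written down in the same toy 2D setting (an illustrative sketch of the idea, not the paper's implementation). A container is (x, z, theta_start, theta_end) with its tip at (x, z); the aperture sits at plane z_a with an opening from a_lo to a_hi, and the angles whose rays miss the opening are cut away.

```python
import math

def clip_by_aperture(lc, z_a, a_lo, a_hi):
    x, z, th0, th1 = lc
    d = z_a - z                          # tip-to-aperture distance (assumed > 0)
    # Angles of the rays from the tip that graze the two edges of the opening:
    edge_lo = math.atan((a_lo - x) / d)
    edge_hi = math.atan((a_hi - x) / d)
    new_th0 = max(th0, edge_lo)
    new_th1 = min(th1, edge_hi)
    if new_th0 >= new_th1:
        return None                      # the whole cone is blocked
    return (x, z, new_th0, new_th1)      # same tip, narrowed angular span

# Rays steeper than about 45 degrees miss a 2-unit opening one unit away:
print(clip_by_aperture((0.0, 0.0, -0.2, 1.2), z_a=1.0, a_lo=-1.0, a_hi=1.0))
```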

For a lens, if this is the lens plane, and we know the focal properties of the lens and we know the lens equation, then a light container coming to this plane transforms into a new one: a new tip position and angular span are given to the new light container. We go through this process until all light containers are processed.
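The lens rule might look like this in the same toy setting (again my own paraxial-style sketch, not the paper's code): the focused tip is imaged through the thin-lens equation, so the container stays focused, and the edge rays are re-aimed from where they cross the lens plane toward the imaged tip.

```python
import math

def through_thin_lens(lc, f):
    x, u, th0, th1 = lc                  # tip at (x, u); u < 0 on the incoming side of a lens at z = 0
    v = 1.0 / (1.0 / f + 1.0 / u)        # thin-lens equation 1/v - 1/u = 1/f
    x_img = x * (v / u)                  # new tip height via transverse magnification m = v/u
    def bend(theta):
        h = x - u * math.tan(theta)      # height where this edge ray crosses the lens plane
        return math.atan2(x_img - h, v)  # after the lens it heads for the imaged tip
    new_th = sorted((bend(th0), bend(th1)))
    return (x_img, v, new_th[0], new_th[1])

# An on-axis tip at distance 2f images to 2f on the other side:
tip_x, tip_z, th_lo, th_hi = through_thin_lens((0.0, -2.0, -0.1, 0.1), f=1.0)
```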

Now, this is a very simple example, the schematic of a single-lens system. This is the image sensor, and this is the single lens.

We project the information from the image plane to the 3D space in front of the camera. Here, if z = 0 is considered the plane of the main lens, and the negative z side holds the plane of the image sensor, then these lines, showing the in-focus light in the form of the light containers as we saw before, are projected to another plane. You can see that there is a big change in the angular span of the light, and the positional information has changed as well. Now we have a new set of light containers in the form of the SPC, from which we can extract properties of interest.

Now, we want to show that the light containers actually reflect the behaviour of the system, or, in better words, that the SPC model in general reflects the sampling behavior of the system. To show that, we apply the SPC model to known cases: the plenoptic camera in its conventional form, and the focused plenoptic camera.

I hope you are familiar with the system setups; I will give some details about them, but I think these are well-known systems. The system contains a main lens and a lenslet array, and the image sensor is placed behind the lenslet array. These two systems both have the same optical elements, and the only difference between them is the distances between the optical elements. Here the spacing between the lenslet array and the image sensor is f, the focal length of the lenslets.

Here it is not the same: we have a different spacing, and there is a relay-system relation between the image plane, which is here, and the image sensor, and the main lens is pushed forward. So the spacings are basically different, although the optical elements are the same, and this slight difference gives them very different properties in terms of sampling and high-level properties of the camera, like resolution, depth of field, and the focal properties.

These slides give a bit more information about each of the cameras. For the first setup, plenoptic camera I, the conventional form, I would like to highlight that the spatial resolution is equal to the number of lenslets in this setup: after we render images, the spatial resolution of the images is equal to the number of lenslets. And there is a trade-off: if you raise the number of lenslets, the spatial resolution goes higher, but the angular resolution goes down. This is the main feature associated with PC I, and I will come back to this point later.
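To make the trade-off concrete, here is a back-of-the-envelope sketch with made-up numbers (not the simulation values from the talk): the sensor's pixels per row are split between spatial samples (lenslets) and angular samples behind each lenslet.

```python
# Illustrative figures only: a fixed sensor row of 4000 pixels.
sensor_pixels = 4000

# Conventional plenoptic camera (PC I): rendered spatial resolution
# equals the lenslet count; the rest of the pixels buy angular samples.
lenslets = 500
angular_samples = sensor_pixels // lenslets   # 8 angular samples per position

# Doubling the lenslet count doubles spatial resolution but halves the
# angular resolution captured behind each lenslet: the PC I trade-off.
assert sensor_pixels // (2 * lenslets) == angular_samples // 2
print(lenslets, angular_samples)  # 500 8
```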

Then the PC F structure, which, as I said, has a relay system between the main lens and the sensor. Because there is a relay system, it can be considered as an array of cameras inside the camera, so the behaviour is more or less similar to a camera array: there are multiple positional samples for each angular sample, and the spatial resolution is decoupled from the number of lenslets in this setup. This is the main difference between the two cameras.

These are the numbers we have used for our simulations; they are typical numbers, and there have been practical setups with these numbers. The basic thing I want to highlight here is that the only differences between plenoptic cameras I and F are in the spacings, between the main lens and the lenslet array on the one hand, and from the lenslet array to the sensor on the other; the rest of the parameters are the same. So whatever differences we see result from the difference in the spacings.

And these are the typical SPC shapes we expect from plenoptic camera I and the focused plenoptic camera. You see that here we have a kind of area sampled, and here we have a very narrow area, in the form of a line, spread in space. The angular span we see here is considerable, while here the angular span of the samples is very small.

Here is a closer look at the same information. We have this for plenoptic camera I, and we can see the area sampled; actually the density is so high that we just see the shape here as a solid color. But if you look inside the information, you can see the light samples in the form of the light containers, and you can see multiple angular samples for a single position: this is a single position, this is its angular span, so there are multiple angular samples for a single position. And this one shows samples coming from behind one lenslet: the information behind one lenslet, captured on the image sensor, has the form of a column in the case of plenoptic camera I.

And this shows basically the same data for the case of plenoptic camera F, and we see that the sampling properties are different: these are multiple position samples for one single angular span, and we can see the data sampled by the pixels behind one lenslet.

I hope this gives you the impression that the SPC follows the behaviour of the camera system. The next slide shows that if we apply variations to the camera system, these variations are reflected in the SPC. The variation I have chosen is the variation of the lenslet pitch size. In this case we can see how the information changes when the pitch size of the lenslets is varied, and we can see the trade-off between angular and spatial resolution in the plenoptic camera I case, while in the focused plenoptic camera there is no such trade-off. This confirms that the SPC model follows the behaviour of the system.

I did not talk much about the feature extractors, which are ongoing work; we are now more or less focusing on the resolution parameters. These feature extractors, as I said, can be in the form of the focal plane, the field of view, the spatial resolution in different depth planes, the angular resolution, the depth resolution, and different focal properties, and we hope to publish some results on this part. I want to conclude that the light field sampling behavior is reflected in this model, and since the SPC preserves the focal properties of the system, it is capable of explaining high-level behavior of the system, like depth of field, or how different rendering algorithms behave at different depths. It is capable of extracting the high-level camera parameters of interest, and at the same time it keeps the model simple while having a high descriptive level. There are some future works, which are actually ongoing works related to this part, and we are trying to investigate existing camera system setups as one of the major points. Thank you for your attention.

Well, in this work we have considered it as a single optical element, but there is no limitation; we can do it the other way. It depends on what you are expecting from the model: for example, whether you are going for a precise result, or you are combining two systems and want to keep the precision as high as possible while spending more on modeling the more complex system. This is a trade-off, and you decide how to work with the model, but it explains the basic behavior of the system. And do not forget that we have very sparse assumptions here: we are working only with geometrical optics, and this is maybe a stronger assumption compared to what you are discussing.

Thank you.