0:00:18 thank you, mister chairman, for the introduction
0:00:21 my name is [inaudible] and I am from Mid Sweden University
0:00:27 and this is a beautiful campus, as you can see
0:00:32 well, I want to thank you for being here at the last of the presentations, and I'm
0:00:37 going to talk about the sampling pattern cube. I'm going to introduce this model in
0:00:42 a brief and clear way, and I hope to get useful questions from you, and
0:00:47 this has been joint work together with my colleagues [inaudible]
0:01:01 first I would like to thank our funders, who supported us in all this work,
0:01:05 and this will be the outline of the talk: first I will give
0:01:09 a little bit of the background and the motivation of the work, and then I
0:01:14 will introduce the model,
0:01:17 the SPC model, reviewing the scope and the definitions and how we generate this SPC
0:01:23 applied to some known cases, and we will see if it is good enough for
0:01:29 us to extract features of interest, and then I will conclude the work together with
0:01:35 some hints about future work
0:01:38 I'd like to mention that our group at Mid Sweden University is named Realistic
0:01:43 3D, and we are more or less interested in 3D information: movies,
0:01:50 images, or video, and we cover more or less the whole chain of 3D,
0:01:55 starting from the capturing, processing, transmission, and the post-processing and adaptation to the display
0:02:03 and the viewing experience. So we cover the whole chain, but this talk will be about
0:02:08 the capturing, and when we talk about capture we mostly mean cameras and the parameters
0:02:14 related to them, so we will talk about cameras in this presentation. And when
0:02:21 we say camera, there can be many different configurations. You are familiar with some
0:02:26 of them, of course; the SLR cameras, for example, are very available and we
0:02:32 know about them, but for specific applications such as light field capturing or other kinds of
0:02:38 capturing, we usually go to unconventional camera setups, and it's important to
0:02:44 be able to model them, to model the light, and to be
0:02:47 able to extract parameters with low complexity and at the same time a good level of description
0:02:54 and for example, this setup here is used in [inaudible],
0:03:01 and, well, they use this setup, which is a camera here and a lens
0:03:06 array, to capture digital photography, and this is a very famous camera array
0:03:13 setup, maybe you have seen it before, and this is a different camera setup with
0:03:20 a different geometry, and the famous Lytro, which is maybe the best-known
0:03:24 light field camera, which is a plenoptic camera. So there are different configurations and things
0:03:29 about them, and we would like to have a model to be able to extract
0:03:36 features like what we have here. I want to say that there are different parameters
0:03:44 related to one camera, and I haven't seen that kind of map or kind
0:03:50 of measure to be able to put them on a scale and to be able
0:03:55 to compare them and, for example, say camera one is better than camera two in some
0:04:00 sense, or, if you do this to the camera, then these parameters are changed
0:04:03 in this way: that kind of behavioural and at the same time descriptive information
0:04:09 about the camera system, about what a camera, I mean the camera system, can
0:04:14 do and how it does so for different camera setups. These are usually
0:04:19 parameters of interest for different applications, and those which I have pointed out here are more
0:04:25 or less related to the focal properties of the camera, and I will come to
0:04:28 this point later. For example, for an application where the angular resolution in a certain
0:04:35 plane is more important for us, maybe we can extract parameters using the model and
0:04:40 see that camera two, which is shown by the right column here, is better for
0:04:46 this application at this distance from the object, or
0:04:51 be able anyhow to compare cameras one and two and come to a conclusion about which
0:04:55 one to choose, or what modification to apply to the camera to get a
0:04:59 better result. So maybe some remarks about the work we are doing: the aim is to keep
0:05:04 the complexity of the model low and at the same time to give it a high
0:05:07 descriptive level, which can be used for extracting features or modeling the system
0:05:14 there are models that are widely used, and I've chosen two instances here;
0:05:20 there may be more models, but these are two typical examples. One is
0:05:24 the ray-based model, which considers light as light rays. You are
0:05:30 familiar with the two-plane representation: having one point in each plane,
0:05:36 the line connecting these two points is considered as the ray, and we work with
0:05:40 this description. The model is usually approximated, in that we consider the angle of
0:05:46 the light rays to the optical axis to be small enough to apply some
0:05:51 approximations, and it is widely used in different applications such as ray tracing, photography, microscopy,
0:05:59 or telescopes, and you are all familiar with this model. A more comprehensive,
0:06:05 more complex model is the wave optics model, which describes light as an
0:06:11 electromagnetic wave, and the methods working with these electromagnetic waves are
0:06:17 usually starting from Maxwell's equations and harmonic waves and Fourier theory, and it is able
0:06:23 to explain more properties, well, at the expense of more complexity. And, well,
0:06:31 we are going to go
0:06:34 somehow in between, something between these two models, and the scope of the work we are
0:06:39 doing is, well,
0:06:42 limited in that it will be a geometry-based model, and it will
0:06:47 exclude wave optics, at least at this stage, and it applies to
0:06:52 optical capturing systems, which can be, as I said, conventional cameras or new setups.
0:07:00 So the motivation of the work is to have a framework for modeling complex capturing
0:07:05 systems, and we expect that this model provides a kind of toolset to be
0:07:10 able to extract properties from the system, at the same time keeping
0:07:15 in mind low complexity and a high descriptive level of the model
0:07:20 so basically the model can be applied to different camera setups, and we generate
0:07:26 the SPC using the tools of mathematics and geometry we have, and, well, I was trying
0:07:33 to show something like the SPC model, which is the sampling pattern cube: so I
0:07:38 put a cube and put the light samples inside, which are in the form of
0:07:41 light containers, which I will introduce, and from this model we are extracting features.
0:07:49 Well, this model is helpful for visualising purposes, and also for describing the sampling behavior
0:07:54 of the system
0:07:57 there can be wide applications for these models: first of all, study and design of
0:08:02 the capturing system can be one application, and investigating system variations: if we have
0:08:07 a system and we alter some parts, or we vary the distances or properties
0:08:11 of the system, how is it reflected in the sampling behavior of the system?
0:08:17 The one I pointed out on the second or third slide is investigating
0:08:22 inter-camera comparisons: which one is better, and in what sense, for this application, if you
0:08:27 have to compare two different camera setups. And one possible application can be adaptation
0:08:33 of the post-processing algorithms, which I will say some more about
0:08:40 well, for this sampling pattern cube I'm talking about, there is a
0:08:44 very simple idea behind it, and it originates from the light samples in this model.
0:08:51 Light samples in this model are in the form of light containers,
0:08:55 and we can put them alongside the rays in the ray-based model. What is
0:09:01 special about light containers is that they are focused light; they are formed from focused light,
0:09:07 so there is a point which all, or a bundle of, the light rays
0:09:12 are passing through, and we call this point the position of the
0:09:16 light container, and there is an angular span associated with the light container. In
0:09:21 this representation we have four angles associated with it, but this is the representation at
0:09:29 this stage. So a light container, which we will use as the basis in the
0:09:36 slides coming next, has the tip position and angular span as its information, and the
0:09:43 focal properties of the system are somehow coded in these samples
0:09:51 well, the light
0:09:53 containers will then produce the sampling pattern cube, which I show here, and
0:10:00 these small light containers are distributed inside the cube, so we can
0:10:07 say that the sampling pattern cube is a set of these light containers, and I
0:10:11 will show how to generate it and how to use it. So basically we
0:10:16 have a camera, and there is the space in front of the
0:10:20 camera, and we try to provide information about how this space is sampled by this
0:10:27 camera, using the light container elements
0:10:33 and for ease of illustration we
0:10:37 have some simplifications here. In the slides coming in this presentation, I
0:10:43 will consider only one row of the image sensor as a starting
0:10:50 point, and I will not go to the full two-dimensional image sensor; it would be
0:10:54 too complicated to put it on a plane to show. And I won't show the light
0:10:58 containers in the 3D representation; I will only go for the 2D representation with
0:11:04 two angles, the starting and finishing ones, and the tip. So these are simplifications we
0:11:09 make for illustration purposes, and
0:11:12 there is one more thing: in the 2D space, instead of x–z, if we
0:11:19 have light containers like this, depicted with a tip position and a starting and a finishing angle,
0:11:26 we transform them to a position–angle representation, and this position–angle representation basically has the x-axis and the
0:11:34 theta-axis, so that the x here is the tip position, and
0:11:39 we have a span, we have an angular span, along the theta axis, and
0:11:44 instead of seeing cones like this, we will have
0:11:49 pieces of lines like this, and
0:11:52 we should keep in mind that these lines being
0:11:55 horizontal in this representation means the light is in focus: it means there is one
0:12:00 piece of positional information associated with the whole line, which is the tip of the light container.
0:12:08 So we will see only positions like this, horizontal lines, in the sampling pattern cube
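To make the description above concrete, here is a minimal sketch (my own naming, not the authors' code) of a light container in this simplified 2D picture: a tip position in the x–z plane plus an angular span.

```python
from dataclasses import dataclass

@dataclass
class LightContainer:
    """One focused light sample: all of its rays pass through the tip (x, z)."""
    x: float          # tip position across the optical axis
    z: float          # tip position along the optical axis
    theta_min: float  # starting angle of the span (radians)
    theta_max: float  # finishing angle of the span (radians)

    @property
    def angular_span(self) -> float:
        return self.theta_max - self.theta_min

# A container whose tip sits 10 units in front of the reference plane:
lc = LightContainer(x=0.5, z=-10.0, theta_min=-0.02, theta_max=0.02)
print(lc.angular_span)  # width of the in-focus cone of rays
```

In the full 3D representation the sample would carry four angles, two per transverse axis, as mentioned above.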
0:12:16 and this one shows the simple idea behind how we generate the sampling pattern cube.
0:12:22 We basically start from the camera: if we consider this part inside the camera,
0:12:27 these are the optical elements in the camera, which can be only a main lens or
0:12:32 a combination of different lens setups, and this is the sensor plane.
0:12:39 What we are going to do is to form light containers
0:12:44 on the sensor plane based on physical properties of the sensor, mainly the light acceptance angle of
0:12:49 the sensor. So from the light acceptance angle we define the first set of
0:12:57 light containers; then we back-trace these light containers into the scene, and you see,
0:13:02 when a light container is passing through an optical element, that container transforms into a new container.
0:13:09 For example, this one is transformed into this one, so a new tip position and
0:13:13 angular information is associated with it, I mean with the light container, and finally,
0:13:19 in an iterative process, we project all the initial light containers to the 3D
0:13:26 scene in front of the camera, and what we get is called the sampling
0:13:30 pattern cube, which we will work with later. Well, this is a more formal presentation
0:13:36 of the same process; we have the flowchart: we
0:13:40 actually form the light containers and then go through this iterative process
0:13:47 to project all the light containers to the scene, and finally we come up
0:13:51 with the set of light containers in the form of the SPC
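The flowchart just described could be sketched as follows; the function names, the tuple layout (tip_x, tip_z, theta_min, theta_max), and the sensor placed at z = 0 are my assumptions for illustration, not the authors' implementation.

```python
import math

def initial_containers(pixel_centers, acceptance_angle):
    """Form one container per sensor pixel from its light acceptance angle."""
    half = acceptance_angle / 2.0
    return [(x, 0.0, -half, half) for x in pixel_centers]

def generate_spc(pixel_centers, acceptance_angle, elements):
    """Back-trace every container through the optical elements in order.

    Each element is a function mapping a container to a transformed
    container, or to None if the container is completely blocked.
    """
    containers = initial_containers(pixel_centers, acceptance_angle)
    for transform in elements:                # iterate over optical elements
        containers = [c for c in (transform(lc) for lc in containers)
                      if c is not None]       # drop fully blocked containers
    return containers                         # the sampling pattern cube

# With an identity "element" the initial pattern reaches the scene unchanged:
spc = generate_spc([0.0, 1.0, 2.0], math.radians(10), [lambda lc: lc])
print(len(spc))  # 3
```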
0:13:57 well, I will not go into the very detailed math; I will just give you some
0:14:01 idea. We have optical elements like lenses or apertures and so on; you
0:14:09 can refer to the paper for more information. But anyway, for example here, if
0:14:13 a light container comes to an aperture: for an aperture, we know where the aperture is,
0:14:19 so we know which plane the aperture is located on, and we know
0:14:26 the opening area of the aperture, and for the light container coming to this aperture, well,
0:14:31 part is cut out, because it does not pass inside the opening of the aperture, and part
0:14:35 is staying here, so we will have a new light container like this, cutting the
0:14:41 part which is not inside the aperture span, and
0:14:48 we will come to this point, which is the new light container, and we will
0:14:51 go to the next iteration steps. And for example, for a lens:
0:14:59 if this is the lens plane, and we know the focal properties of the lens,
0:15:02 and we know the lens equation, then a light container coming to this plane, well, transforms
0:15:08 into a new one,
0:15:10 and a new position and angular span are given to the new light container, and we
0:15:16 go through this process.
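The two element transforms just described might look like this in the same tuple notation (a paraxial sketch under my own assumptions; the paper gives the exact formulations): the aperture clips the angular span to the rays that pass its opening, and the thin lens maps the tip to its conjugate point via the lens equation, re-deriving the span from where the boundary rays hit the lens plane.

```python
import math

def clip_by_aperture(lc, z_ap, x_lo, x_hi):
    """Keep only the part of the span passing the opening [x_lo, x_hi]."""
    tip_x, tip_z, t_min, t_max = lc
    d = z_ap - tip_z                        # tip-to-aperture distance (> 0)
    # A ray at angle t reaches the aperture plane at x = tip_x + d*tan(t);
    # invert that to find the angles hitting the edges of the opening.
    a_lo = math.atan((x_lo - tip_x) / d)
    a_hi = math.atan((x_hi - tip_x) / d)
    new_min = max(t_min, min(a_lo, a_hi))
    new_max = min(t_max, max(a_lo, a_hi))
    if new_min >= new_max:
        return None                         # container completely blocked
    return (tip_x, tip_z, new_min, new_max)

def through_thin_lens(lc, z_lens, f):
    """Transform a container through an ideal thin lens of focal length f."""
    tip_x, tip_z, t_min, t_max = lc
    a = z_lens - tip_z                      # object distance (tip to lens)
    b = 1.0 / (1.0 / f - 1.0 / a)           # image distance: 1/a + 1/b = 1/f
    new_x = -(b / a) * tip_x                # lateral magnification m = -b/a
    new_z = z_lens + b                      # new tip = image of the old tip
    # Boundary rays hit the lens at x = tip_x + a*tan(t); after the lens
    # they pass through the new tip, which fixes the new angular span.
    hits = (tip_x + a * math.tan(t) for t in (t_min, t_max))
    angles = sorted(math.atan2(new_x - h, b) for h in hits)
    return (new_x, new_z, angles[0], angles[1])

# A wide cone through a narrow opening keeps only the central angles:
print(clip_by_aperture((0.0, 0.0, -0.5, 0.5), 10.0, -1.0, 1.0))
# A tip at 2f in front of a lens images to 2f behind it, inverted:
print(through_thin_lens((1.0, 0.0, -0.1, 0.1), 20.0, 10.0))
```

In the lens example, a = 2f gives b = 2f and magnification -1, so the tip at x = 1 lands at x = -1, z = 40; the degenerate case a = f (collimated output, b infinite) would need special handling that this sketch omits.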
0:15:19 This continues until all light containers are processed. And now, this is a very simple example, the
0:15:24 schematic of
0:15:26 a single-lens system:
0:15:28 if this is the image sensor and this is the single-lens system,
0:15:33 we have projected the information from the image plane to the 3D space in front
0:15:38 of the camera. Or here, if z equals zero is considered the plane of the main lens,
0:15:45 and z equals minus d is the plane of the image sensor, then these
0:15:52 lines showing the in-focus light, which is in the form of the light containers as
0:15:57 I said before, are projected to another plane, so you see that the angular
0:16:03 span of the light containers has changed a lot, as well as their positional
0:16:07 information, and now we have a new set of light containers in
0:16:12 the form of the SPC that we can extract properties of interest from
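As a worked version of this single-lens projection, with assumed numbers rather than the slide's values: the tip of a container formed on the sensor plane maps through the main lens to the conjugate plane in the scene, which is where its horizontal line appears in the SPC.

```python
# Assumed example numbers (f = 50 mm main lens, sensor 75 mm behind it);
# these are my illustration values, not the values on the slide.
f = 50.0   # focal length of the main lens (mm)
d = 75.0   # lens-to-sensor distance (mm)

# Lens equation 1/d + 1/b = 1/f gives the in-focus plane in the scene.
b = 1.0 / (1.0 / f - 1.0 / d)
m = b / d                            # magnitude of the magnification

tip_x_sensor = 2.0                   # container tip on the sensor (mm)
tip_x_scene = -m * tip_x_sensor      # imaged tip, inverted by the lens
print(b, tip_x_scene)                # conjugate distance and imaged tip
```

With these numbers the conjugate plane lies 150 mm into the scene and the tip at 2 mm images to -4 mm, so the whole sensor-side pattern lands, magnified and inverted, on that plane.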
0:16:17 well, we want to show that
0:16:21 the light container is actually reflecting the behaviour of the system, or, in
0:16:25 better words, that the SPC model in general is reflecting the sampling behavior of
0:16:30 the system, and to show that we are applying this SPC model to known
0:16:36 cases: the plenoptic camera
0:16:39 in the conventional form, and the focused plenoptic camera.
0:16:45 I hope you're familiar with these system setups; I will give some details about
0:16:51 them, but, well, I think these are well-known systems. The system
0:16:58 contains the main lens and a lenslet array, and the image sensor is placed behind the
0:17:04 lenslet array, and the two systems both have the same optical elements; the only
0:17:10 difference between them is the distances between the optical elements. Here we have
0:17:17 the spacing between the lenslet array and the image sensor equal to the focal
0:17:22 length of the lenslets;
0:17:24 here it's not the same: we have a different spacing, and it is smaller than that,
0:17:30 and there is a relay system relation between the image plane, which is here,
0:17:36 and the image sensor,
0:17:40 and the main lens is pushed forward, so the spacings are basically different although
0:17:45 the optical elements are the same, and this slight difference gives them very
0:17:50 different properties in terms of sampling and
0:17:53 the high-level properties of the camera, like resolution, like depth of field, the focal properties.
0:18:02 Well, here is just a bit more information about one of the cameras,
0:18:09 the first setup, the plenoptic camera in its conventional form, and I would like
0:18:14 to highlight that the spatial resolution is equal to the number of lenslets in this
0:18:18 setup: after we render images, the spatial resolution of the images is equal to the
0:18:24 number of lenslets, and there is a trade-off, so if you raise the number
0:18:28 of lenslets the spatial resolution is going higher, but the angular resolution is going lower,
0:18:33 so this is the main feature associated with plenoptic camera I, and I will come back
0:18:38 to this point later.
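The trade-off just stated can be seen with a quick back-of-the-envelope calculation (the example numbers are my assumption, not the talk's setup): the sensor pixel budget along a row is fixed, so each extra lenslet buys a spatial sample at the cost of angular samples.

```python
sensor_pixels = 4000                 # pixels along one row of the sensor
for n_lenslets in (100, 200, 400):
    spatial_res = n_lenslets                   # rendered spatial samples
    angular_res = sensor_pixels // n_lenslets  # pixels behind each lenslet
    print(f"{n_lenslets} lenslets -> spatial {spatial_res}, angular {angular_res}")
```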
0:18:42 And the focused plenoptic camera structure,
0:18:46 which, as I said, has a relay system between the main lens and the lenslets:
0:18:51 since there is a relay system, the lenslets can be considered as an array of
0:18:56 cameras inside the camera, so the behaviour is more or less similar to a camera
0:19:01 array, and there are multiple positional samples for each angular sample, and the spatial resolution,
0:19:09 which is decoupled from the number of lenslets in this setup, is the
0:19:14 main difference between the two cameras.
0:19:19 These are also the numbers we have used for our simulations; these are typical
0:19:25 numbers, and there have been practical setups with these numbers. The basic thing
0:19:30 I want to highlight here is that the only difference between plenoptic camera
0:19:34 I and the focused one is in the spacings,
0:19:39 between the main lens and the lenslet array on the one hand, and from the lenslet array
0:19:42 to the sensor on the other, and the rest of the parameters are the same, so whatever differences we see
0:19:48 result from the difference in the
0:19:52 spacings. And these are the typical
0:19:56 SPC shapes we expect from plenoptic camera I and from the focused plenoptic camera:
0:20:01 you see that we have a kind of area sampled here, and here we
0:20:05 have a very narrow area in the form of a line, maybe spread in depth,
0:20:11 and the angular span we see here is very considerable, while the angular span of
0:20:16 the samples is very small here.
0:20:20 Here is a closer look at the same information: we have this for plenoptic camera I,
0:20:29 and we can see in there,
0:20:33 this is the area sampled. Actually the density is too high, so
0:20:38 we just see the shape here as a color, but if we look from the inside
0:20:42 at the setup's
0:20:44 information, we can see the light samples here in the form of light
0:20:49 containers, and we can see multiple angular samples for a single position: this is a single
0:20:55 position, and this is its span, so there are multiple angular samples for a single position, and this
0:21:02 one shows the samples coming from behind one lenslet, so the information behind one lenslet
0:21:10 captured on the image sensor has the form of a column,
0:21:14 in the case of plenoptic camera I.
0:21:18 And this is what we also see,
0:21:21 basically the same data, and this is the case of the focused plenoptic camera,
0:21:27 and we see the sampling properties are different: these are multiple position samples for
0:21:33 one single angular span, and we can see the data sampled by the pixels behind one lenslet.
0:21:40 I hope this gives you the impression that the
0:21:43 SPC is following the behaviour of the camera system. So the next slide is
0:21:50 showing that, actually, if we apply variations in the camera system, these variations are reflected in
0:21:55 the SPC, and this variation I've decided to be the variation of the lenslet pitch
0:21:59 size. In this case we can see how the information
0:22:06 of interest changes when the pitch size of the lenslets is varied, and we
0:22:13 can see the trade-off between angular and spatial resolution in the plenoptic camera
0:22:17 I case, while in the focused plenoptic camera there is no trade-off, and
0:22:22 it is confirming that this SPC model is following the behaviour of the system.
0:22:28 I did not talk about the feature extractors, which are
0:22:32 ongoing work; we are more or less now focusing on the resolution parameters, and
0:22:39 these feature extractors, as I said, can be, for example, the focal plane, field of view, spatial resolution in
0:22:46 different depth planes, angular resolution, depth resolution, and different focal properties, and
0:22:52 we hope to publish some results in this part. And I want to
0:22:57 conclude that the light field sampling behavior is reflected in this model, and, since
0:23:04 the SPC preserves the focal properties of the system, it is capable of explaining high-level
0:23:10 behavior of the system, like focal properties, like
0:23:15 depth of field, or like how different rendering algorithms behave at different depths, and it
0:23:22 is capable of extracting the high-level camera parameters of interest, and at the same
0:23:27 time it keeps things simple, but it has a high
0:23:32 descriptive level. And, well, there are some future works, and they are actually ongoing works
0:23:38 related to this part: we are trying to investigate existing camera setups
0:23:44 as one of the major points of the work. Thank you for your attention.
[Audience question]
0:24:20 Well, in this we have considered them as a single optical element, but there is
0:24:25 no limitation; we can do it the other way too. It depends on what you're expecting from
0:24:30 the model: if, for example, you are going for a precise result from the
0:24:37 model, or if you're combining two systems and you want to keep precision as much
0:24:41 as possible, then you're spending more on modeling the more complex systems. This is a
0:24:47 trade-off, and you will decide how to work with this model, but this
0:24:52 explains the basic behavior of the system, and
0:25:05 yeah, don't forget that we have some very sparse assumptions here: we're working
0:25:10 only with
0:25:12 geometrical optics, and this is maybe even, I mean, this is a
0:25:18 stronger assumption compared to what you are discussing.
0:25:31 thank you