Well, I will try to be quick, since we are already running a bit behind schedule.

It is my pleasure to introduce this project to you. It is a European project with some unique properties. The project's full title is Robust and Safe Mobile Cooperation, and it is usually referred to by its abbreviation.

At the beginning I will introduce the project itself, its goals and structure, and the demonstrators. Then I will present the scientific developments and achievements of our work at the university, and finally I will describe how we integrate our results into the project demonstrators.

The main project goal is the production of advanced, robust systems with cognitive reasoning for practical robotics at reduced cost; the cost reduction is the most important point. The way to get there is the design of reusable building blocks collected in a knowledge base. That is the main idea of the project: a knowledge base of independent, reusable building blocks, so that any further development becomes cheaper than it is at present.

Based on the state of the art, the project goal is to specify objectives and develop novel solutions covering all the crucial robotics topics, such as hardware performance, sensing and perception, modelling, reasoning, decision making, and validation and testing. The designed solutions are then used to build the knowledge base, together with tools for rapid development and standardized testing, and a methodology to manage those two approaches.

One of the key outputs, as I said, is the development of the knowledge base, or rather a knowledge-based framework. The first two inputs, state-of-the-art records and newly developed features, are used to fill this knowledge base, and a methodology is designed for working with this knowledge. The modelled solutions are then integrated into the demonstrators from the different industrial partners. So here are examples of the application domains that are covered by the industrial partners in this project.

Perhaps I should first show you the partners: there are twenty-seven of them, which is quite a large number. The project is focused on cooperation with industry, so besides universities and national research centres, a high proportion of the project members are industrial representatives.

Coming back to this slide: both aerial and ground robots, operating either independently or in cooperation, provide solutions for surveillance, security and inspection tasks, and allow them to be addressed more extensively. Examples might be monitoring borders or the seashore, or inspecting infrastructure. One domain I will come back to in detail later, because that is where we integrate our solutions.

In the industrial manufacturing of wooden parts and products, processes like cutting, surface finishing and assembly of the parts are usually carried out by CNC machines. In current practice, manual operation alongside the machines is still necessary, because the machines are mostly unable to work with small parts, so one objective of the project is to solve these problems too. Then there is a set of further applications that I can skip, because they were discussed in detail in the previous presentation.

So where in this structure is the research done at our university? It is driven by the requirements of the project applications, where online, i.e. real-time, sensor data processing is a key requirement. Our approach is the development of novel methods, the optimization of existing ones, and their acceleration in hardware. We have developed several methods, and the results of our research are based on sensor fusion, usually for the tasks of robot localisation and object detection, i.e. perception.

For robot localisation we developed a method that fuses two existing methods. The first is a mapping method that processes laser scans; it has very high precision, but as you can see, the resulting model of the environment is quite sparse, so it is hard to extract any knowledge from it. The other method is based on data from a Kinect; we have shown that this data can produce a much more informative model, but its precision is worse. If we fuse those two methods, we obtain roughly the precision of the laser-based method together with a model of much higher quality, so the final solution is both precise and rich in information.
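As a rough illustration of the fusion idea (a generic inverse-covariance weighting sketch, not the project's actual algorithm; all numbers below are made up), combining a precise estimate with a noisier one yields a fused estimate dominated by the precise sensor:

```python
import numpy as np

def fuse_pose_estimates(pose_a, cov_a, pose_b, cov_b):
    """Fuse two independent position estimates (e.g. laser-based and
    Kinect-based) by inverse-covariance weighting.

    pose_a, pose_b : (2,) arrays, (x, y) position estimates
    cov_a,  cov_b  : (2, 2) covariance matrices of each estimate
    Returns the fused pose and its (smaller) covariance.
    """
    inv_a = np.linalg.inv(cov_a)
    inv_b = np.linalg.inv(cov_b)
    cov_f = np.linalg.inv(inv_a + inv_b)            # fused covariance
    pose_f = cov_f @ (inv_a @ pose_a + inv_b @ pose_b)
    return pose_f, cov_f

# Precise laser estimate vs. noisier Kinect estimate:
laser = (np.array([1.00, 2.00]), np.eye(2) * 0.01)
kinect = (np.array([1.20, 1.90]), np.eye(2) * 0.25)
pose, cov = fuse_pose_estimates(*laser, *kinect)
# The fused pose stays close to the precise laser estimate.
```

The same weighting generalises to full poses and to running filters (e.g. a Kalman filter) when the estimates arrive as a stream.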

For object detection we experimented with and implemented several methods; I will briefly introduce two of them, where we achieved nice results that were published. All those detectors are then used in our system and fused together to improve the robustness of the final solution and to model the environment more precisely.

The first one is segmentation of the video signal by tracking local features. In comparison with existing methods, which focus more on precision and stability, we, as I said, focus on speed and on working in an online manner. The existing methods are usually offline: they use the whole sequence to produce feature tracks and then cluster them in a clever way. In online processing we cannot use data from the future, so this was the problem we needed to solve. We came up with a method that is able to run online and in real time with comparable precision but many times higher speed, so its computational cost is very low. The method is able to segment while the robot moves: it segments objects that move relative to each other in the video stream. That does not necessarily mean the object itself has to move; if an object is at a different distance from the background, camera motion alone makes them move relative to each other in the image, and we are able to segment it. Actually, I should not say "we": this is all computed automatically. In the picture you see the resulting segmentation; note that it does not tell you what kind of object each segment is, only that it is a separate object.
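To give a flavour of online motion segmentation (a deliberately naive sketch, not our published method), tracked features can be grouped by motion similarity using only the displacement between two consecutive frames, so no future data is needed:

```python
import numpy as np

def segment_by_motion(prev_pts, curr_pts, vel_thresh=2.0):
    """Group tracked features into motion segments using only the
    displacement between two consecutive frames (usable online).

    prev_pts, curr_pts : (N, 2) arrays of feature positions
    Returns an array of segment labels, one per feature.
    """
    flow = curr_pts - prev_pts
    labels = -np.ones(len(flow), dtype=int)
    next_label = 0
    for i in range(len(flow)):
        if labels[i] >= 0:
            continue
        # start a new segment with every yet-unlabelled feature
        labels[i] = next_label
        # assign all features whose motion is consistent with it
        similar = np.linalg.norm(flow - flow[i], axis=1) < vel_thresh
        labels[similar & (labels < 0)] = next_label
        next_label += 1
    return labels

# Background drifts left, one object moves right:
prev = np.array([[0, 0], [10, 0], [20, 0], [50, 50]], float)
curr = prev + np.array([[-1, 0], [-1, 0], [-1, 0], [6, 0]], float)
labels = segment_by_motion(prev, curr)
# → the first three features share one label, the fourth gets another
```

A real system would track features with optical flow and accumulate evidence over several frames instead of thresholding a single displacement.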

The second method I want to introduce is a processing method for depth data. The idea is based on the observation that in indoor scenes many large surfaces, such as the floor and walls, are planar, so the method first segments those planes and then the remaining objects. Again, the focus is on computational efficiency: we achieve slightly better precision than existing methods, while being many times faster than them.
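A common building block for this kind of depth segmentation is RANSAC plane fitting; the following is a generic sketch of that standard technique (not the specific method presented in the talk):

```python
import numpy as np

def ransac_plane(points, iters=200, dist_thresh=0.02, seed=0):
    """Fit the dominant plane in a 3-D point cloud with RANSAC and
    return the inlier mask, e.g. to peel off floor/wall points so the
    remaining clusters can be treated as objects.
    """
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:              # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# Synthetic scene: a flat floor plus a small box above it.
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(0, 2, 300),
                         rng.uniform(0, 2, 300),
                         np.zeros(300)])
box = np.column_stack([rng.uniform(0.5, 0.7, 50),
                       rng.uniform(0.5, 0.7, 50),
                       rng.uniform(0.3, 0.5, 50)])
cloud = np.vstack([floor, box])
plane_mask = ransac_plane(cloud)
objects = cloud[~plane_mask]    # what is left are the object points
```

Removing the dominant planes first is what makes the subsequent object segmentation cheap, since only a small fraction of the points remains.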

The last piece of research we are doing here for the project is for its validation and verification part, where we are contributing some common features. Usually, when robotic systems are developed and need to be verified, some simulation process is needed, and the simulator typically generates ideal, rendered images. What we are trying to do is to make them resemble the real situation, so we introduce various distortions into the rendered images: sensor noise and many other distortions such as chromatic aberration and lens flare, to make the ideal image imperfect, basically.
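The kind of degradation pipeline described above could be sketched like this (an illustrative approximation; the actual distortion models used in the project are more sophisticated):

```python
import numpy as np

def degrade(image, noise_sigma=8.0, ca_shift=2, vignette=0.4, seed=0):
    """Degrade an ideal rendered RGB image so it looks more like a real
    camera frame: a horizontal red-channel shift as a crude stand-in
    for chromatic aberration, corner vignetting, and Gaussian noise.

    image : (H, W, 3) uint8 array
    """
    rng = np.random.default_rng(seed)
    out = image.astype(float)

    # crude chromatic aberration: shift the red channel sideways
    out[..., 0] = np.roll(out[..., 0], ca_shift, axis=1)

    # vignetting: darken pixels with distance from the image centre
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)
    out *= (1.0 - vignette * r**2)[..., None]

    # additive Gaussian sensor noise
    out += rng.normal(0.0, noise_sigma, out.shape)
    return np.clip(out, 0, 255).astype(np.uint8)

ideal = np.full((64, 64, 3), 200, dtype=np.uint8)
real_ish = degrade(ideal)   # corners come out darker and noisy
```

Running a perception pipeline on such degraded renders instead of the ideal ones gives a much better estimate of its real-world behaviour.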

So that was our research from the scientific point of view; now to how we apply and integrate our solutions into the demonstrators. We cooperate with an industrial partner whose products are AGVs, automated guided vehicles: laser-guided vehicles that physically move goods. The AGVs are the link between the different machines in big warehouses, moving the pallets, and they do it autonomously, controlled by one central unit that oversees the whole situation in the warehouse. We have solved two tasks with that partner.

The first is obstacle avoidance, which is crucial for their solution: when two AGVs meet somewhere, for example because one of them is delayed or blocked by something, they can end up facing each other in a corridor where this was not planned. So we need to detect the other vehicle, or any other obstacle, and avoid it.

The second task is the truck application: the AGVs need to drive into a truck and load or unload the products there. The problem is that the existing solution in this framework is based on a laser triangulation positioning system that works only inside the warehouse; as soon as the vehicle leaves the warehouse, it no longer knows what its position is. So the goal is to somehow measure and perceive the vehicle's localisation outside the warehouse, inside the truck, where there are also some additional constraints.

So these are the two tasks we are trying to solve. Here is an example, a series of frames; the robot's primary sensor here is a camera, and the vehicle is driven manually. It shows how we detect an object and then determine what type it is: whether it is a person, another robot, or a static object. According to this information we can decide whether the vehicle may avoid the object or not, for safety reasons: a person, for example, is not allowed to be driven around; the vehicle has to stop and wait instead.

For this we designed the structure I present here. The input data come from the sensors; we detect objects in some form, either in the image domain or in other sensory data. In our experiments we use RGB-D data and some constraints given by the environment, and we detect and track the objects in the environment. Having information about the objects, we classify what type each one is, and whether it constitutes a dangerous situation or just a warning situation. Based on this information, the AGV can decide whether it wants to avoid the object; for that we have a planning module that computes the path and provides this information to the AGV. Note that here we do the perception and the modelling, but we do not do any control of the AGV: we only provide measurements, analyses and proposals of what the AGV might do, while the vehicle's own controller makes the decision.

Here is an example: an obstacle is detected in front of the AGV and classified, and according to the object type and its position the situation is rated as a danger or just a warning. Based on this information we apply something like decision making: if it is a person and it is too close, the vehicle must stop; if it is another robot and it is far away, all we need to do is slow down. This table describes the decision making, and the planning module also computes the avoidance path in case the avoidance procedure is executed.
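The decision table could look roughly like this (the classes, thresholds and actions below are illustrative, not the partner's actual safety policy):

```python
def decide(obj_class, distance_m, close_m=1.5, far_m=5.0):
    """Map a classified obstacle and its distance to an AGV action.
    Illustrative rule table; real deployments follow certified
    safety standards, not ad-hoc thresholds like these.
    """
    if obj_class == "person":
        # persons are never driven around: stop when near, else slow
        return "stop" if distance_m < far_m else "slow_down"
    if obj_class == "robot":
        if distance_m < close_m:
            return "avoid"        # plan a path around the other AGV
        if distance_m < far_m:
            return "slow_down"
    return "continue"             # far away, or static clutter

print(decide("person", 1.0))     # stop
print(decide("robot", 3.0))      # slow_down
print(decide("robot", 10.0))     # continue
```

Keeping the policy as an explicit table like this also makes it easy to review and test independently of the perception stack.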

So this is our whole system for the obstacle avoidance task.

The second task, as I said, is to get into the truck: the vehicle has to localise itself in the truck using a different localisation method from the one used in the warehouse. Since we cannot rely on visual information inside the truck, we based our solution on laser measurements. We designed a simple model of the truck, we match the measured laser points against that model, and we provide the resulting measurements to the AGV. The AGV then uses this localisation information in the same way as it uses the localisation provided by the warehouse system.
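As a toy illustration of matching laser points against a simple truck model (a heavily simplified stand-in for the actual method): if the two dominant clusters in a 2-D scan are the trailer's side walls, the robot's lateral offset follows from their midline:

```python
import numpy as np

def lateral_offset_in_truck(scan_xy):
    """Estimate the AGV's lateral offset inside a truck trailer from a
    2-D laser scan, assuming the points split into the left and right
    side walls (a toy version of matching the scan to a truck model).

    scan_xy : (N, 2) points in the robot frame, x = lateral axis
    Returns the robot's offset from the trailer centreline.
    """
    x = scan_xy[:, 0]
    left = x[x < 0]          # wall points on the robot's left
    right = x[x >= 0]        # wall points on the robot's right
    centre = (left.mean() + right.mean()) / 2.0
    return -centre           # robot offset mirrors the wall-centre shift

# Robot 0.3 m right of centre in a 2.4 m trailer:
# left wall appears near x = -1.5, right wall near x = +0.9.
rng = np.random.default_rng(0)
left_wall = np.column_stack([-1.5 + rng.normal(0, 0.01, 100),
                             rng.uniform(0, 5, 100)])
right_wall = np.column_stack([0.9 + rng.normal(0, 0.01, 100),
                              rng.uniform(0, 5, 100)])
scan = np.vstack([left_wall, right_wall])
offset = lateral_offset_in_truck(scan)   # ≈ +0.3
```

A full solution would also estimate heading and depth, typically by fitting line segments to the walls and back of the trailer and aligning them with the model.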

To summarise: we have developed methods for sensor data processing, including their optimization and acceleration in hardware, and the results of those methods are used in two of the project demonstrators. We also created an experimental robotic platform, which you can see here, for running our experiments, and most of our results are integrated into the demonstrator use cases. Thank you very much for your attention.