Thank you. There are about fifteen slides separating us from the break, so let me go quickly.

This work uses sparsity techniques, specifically the K-SVD, in order to attack the problem of CT reconstruction from noisy data. It was done jointly with my two supervisors, Michael Elad and Michael Zibulevsky. A snapshot of the resulting technique is at the lower part of the slide.

In contrast to the previous work, where the dictionary is trained on the data itself, here the training is done offline. We then use a standard reconstruction algorithm such as the filtered back projection, but before that, some noise reduction and processing is performed in the sinogram domain, using learned sparse representations. And finally we get the image.

So that is the snapshot, and before getting into the details, let me recall the basic model of CT imaging. We have, in this two-dimensional case, an axial slice of a human head, which we are scanning with X-rays. Each ray has some initial intensity of I0 photons per unit time, and as the ray travels through the body, the photons are absorbed in the tissue. From the final number of photons that we count, one can estimate the line integrals of the attenuation map; this is actually the Radon transform, the two-dimensional Radon transform, of that attenuation map. So what we are measuring are approximations of the line integrals, up to a log transform and noise.
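
To fix notation, here is my reconstruction of the standard model being described (the symbols are mine, not from the slides): for a ray L at angle theta and offset s,

\[
I \;=\; I_0\,\exp\!\Big(-\int_L \mu(x)\,dx\Big)
\quad\Longrightarrow\quad
-\log\frac{I}{I_0} \;=\; \int_L \mu(x)\,dx \;=\; (R\mu)(\theta, s),
\]

where \(\mu\) is the attenuation map and \(R\) is the two-dimensional Radon transform.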

Now, in low-dose scans, when we want to keep the photon counts I and I0 low, we have a deviation from the correct, ideal number of photons that we count. Therefore the measurements are modeled as random variables: instances of Poisson variables with a parameter lambda, which is both the expectation of the variable and its variance. The higher the lambda, that is, the number of photons we count, the better the signal-to-noise ratio.

So we have here a trade-off between the good image we want to get and the radiation dose the patient receives, and the goal is, for a given dose, to improve the image.
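
To make the dose/noise trade-off concrete, here is a minimal simulation sketch of the measurement model (my own illustrative code, not from the talk):

```python
import numpy as np

def simulate_low_dose(radon_mu, I0=1e4, seed=0):
    """Simulate photon-limited CT measurements under the Poisson model.
    radon_mu holds the ideal line integrals (R mu); I0 is the source intensity."""
    rng = np.random.default_rng(seed)
    lam = I0 * np.exp(-radon_mu)        # expected photon count per detector bin
    counts = rng.poisson(lam)           # Poisson: mean == variance == lam
    counts = np.maximum(counts, 1)      # guard against log(0) on opaque rays
    return -np.log(counts / I0)         # noisy sinogram after the log transform
```

Halving I0 halves the expected counts and roughly doubles the variance of the log-domain noise, which is exactly the trade-off between dose and image quality.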

A number of things can be done in order to reduce the dose in a CT scan. The first is to use algorithms which were specifically designed to acknowledge this statistical model: there is a MAP objective which is minimized iteratively. Usually those algorithms are quite slow, despite the fact that they do truly improve the performance over basic algorithms such as the filtered back projection or other similar fast methods, which are what is actually used in clinical CT machines.

Another way to drastically reduce the amount of X-ray radiation is not to illuminate the entire head: if we only want to look at a small region of interest inside the head, we can get away with radiating only that region, plus some small additional amount of data. Theoretically we must radiate the whole head in order to recover even one pixel, because the Radon transform is not local; but in practice we can get by with much less radiation and still get a good image of some region. There are special algorithms to do that.

We want to consider the scenario where we do scan the whole head, and in order to improve the result of the filtered back projection, we perform some sinogram restoration, some processing of the sinogram, before FBP is applied. So let us recall the flow of images that we get along the way. From the scanned head we get the ideal sinogram; what we actually have is computed from the photon measurements with a log transform, and contains data-dependent noise, which is added because of the low photon count in the system. From here we want to recover the final image.

Okay, now that we have defined the goal, let us get to the method of attacking the problem. The basic assumption is that small image patches admit sparse representations in a suitable dictionary. You know that when we try to decompose natural signals in frames such as wavelets, discrete cosines, or the Fourier transform, we get a rapid decay of the coefficients. Similarly, we want a model where we assume that only a few non-zero coefficients are needed to represent the signal well. Here the signal is a small square patch of the image: we arrange it as a column vector, and we want to represent it as the product of a matrix D and a representation alpha, where D is redundant, so that alpha is much longer, but we only use a few atoms from D. We want both a sparse vector, where the L0 norm measures the sparsity, that is, the number of nonzero elements, and a small residual error.
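
In symbols (my paraphrase of the model): for a vectorized patch \(x \in \mathbb{R}^n\) and a redundant dictionary \(D \in \mathbb{R}^{n\times m}\), \(m > n\), we seek

\[
\min_{\alpha}\ \|\alpha\|_0 \quad \text{s.t.}\quad \|D\alpha - x\|_2 \le \epsilon,
\]

where \(\|\alpha\|_0\) counts the nonzero entries of \(\alpha\).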

Based on this principle there is a noise reduction technique for standard image processing, developed by Elad and Aharon in 2006. They define an objective function which contains three basic terms. The first term is a fidelity term, which compares the noisy image f-tilde, the given data, with the image f that we try to recover. The second one requests that all the representations are sparse: the index j runs over all the small overlapping patches, the operator E_j extracts a small patch from f, and this patch is compared to its sparse encoding in the form D alpha_j. So alpha_j is a sparse representation whose L0 norm is small, and the residual, the L2 norm of the difference, is also required to be small. This equation is solved online: after the noisy image is given, the dictionary D and the representations alpha are both learned only from the noisy image itself.
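
Written out, the objective being described (my reconstruction of the Elad-Aharon formulation, in the talk's notation) is

\[
\min_{f,\,D,\,\{\alpha_j\}}\ \lambda\,\|f-\tilde f\|_2^2 \;+\; \sum_j \mu_j\,\|\alpha_j\|_0 \;+\; \sum_j \|D\alpha_j - E_j f\|_2^2,
\]

where \(\tilde f\) is the noisy image and \(E_j\) extracts the \(j\)-th overlapping patch.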

There is also an offline variant of this procedure, which uses training images.

So here is what is called the K-SVD algorithm. We minimize the second and third relevant terms for D and alpha, and there are two alternating steps to do it: we optimize for the alphas and for the dictionary iteratively. To compute the alphas given a dictionary D, we perform what is called sparse coding: we want to find the sparsest alpha_j under the condition that the norm of the residual is below some threshold epsilon_j. This is done using an approximate greedy algorithm, a pursuit algorithm such as Orthogonal Matching Pursuit (OMP), or another similar algorithm.
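
For illustration, here is a minimal greedy OMP in the spirit of what is named here (a sketch, not the implementation used in this work):

```python
import numpy as np

def omp(D, x, eps):
    """Orthogonal matching pursuit (illustrative): greedily grow the support
    of the code `a` until the residual ||D a - x||_2 drops below eps."""
    n_atoms = D.shape[1]
    support, a = [], np.zeros(n_atoms)
    residual = np.asarray(x, dtype=float).copy()
    while np.linalg.norm(residual) > eps and len(support) < n_atoms:
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom best matching the residual
        if j in support:                             # numerical stall: stop early
            break
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        a[support] = coef                            # orthogonal projection coefficients
        residual = x - D @ a
    return a
```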

The second stage in the iteration is the dictionary update; it is not relevant to my talk, so I will skip it. Finally, when we have both the dictionary and the representations, we can compute the output image using the first and third relevant terms, minimized for the image. There is actually a closed-form solution to this equation, so this step is quick.
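
The closed form in question (my transcription; it follows because \(\sum_j E_j^T E_j\) is diagonal) is

\[
\hat f \;=\; \Big(\lambda I + \sum_j E_j^T E_j\Big)^{-1}\Big(\lambda \tilde f + \sum_j E_j^T D\alpha_j\Big),
\]

which is just a pixel-wise averaging of the denoised overlapping patches with the noisy image.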

Okay. Now, this technique, which is by the way quite efficient for noise reduction in images, was used a couple of years ago by Liao and Sapiro to produce a reconstruction of an image from CT data. It is basically the same equation as we saw a minute ago, except that the fidelity term compares the noisy sinogram g-tilde with the sought image f transformed by the Radon transform. First of all I should say that this paper shows some very impressive results on images with simple, regular structure, under severe conditions of partial data: they used very few projections.
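
For reference, my reconstruction of the modified objective being described is

\[
\min_{f,\,\{\alpha_j\}}\ \lambda\,\|Rf-\tilde g\|_2^2 \;+\; \sum_j \mu_j\,\|\alpha_j\|_0 \;+\; \sum_j \|D\alpha_j - E_j f\|_2^2,
\]

where \(R\) is the Radon (projection) operator and \(\tilde g\) is the measured noisy sinogram.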

But in principle there are a few problems in this equation, which we wanted to try and repair in a different setting.

So first of all, note the use of the L2 norm in the fidelity term, which actually means the assumption that the noise is homogeneous. However, we know that in the sinogram domain the noise does depend on the data; moreover, we know exactly how: we know the variance of the noise, so this can be used. The second problem with the fidelity term is that we actually want a low error in the image domain, since images are what we are after, instead of decreasing the error in the sinogram domain; and since the Radon transform is ill-conditioned, a small sinogram error does not tell us much about the image error we will be seeing.

The next problem, which is also a modeling issue, is that we cannot easily obtain the coefficients mu_j. They are related directly to the thresholds epsilon_j (we do not really use the mu_j themselves): for each patch we need to know the expected error energy, so as to put the correct threshold here. If we do not know the noise statistics, and we do not know the noise statistics in the CT image domain, because after the reconstruction the noise is quite complicated, then we cannot compute these thresholds correctly. There are some estimation techniques, but they do not provide very good results.

So we are trying to solve these problems by shifting to a different setting, where we do have a filtered back projection step (note that the previous concept does not use any reconstruction algorithm: the image is just the minimizer of the equation), and where we use an offline algorithm which provides learning with training data.

So we shift from the image domain: the sparse coding is now done in the sinogram domain, and we sparse-code the patches of the sinogram instead of the image. But the penalty that we are seeking to minimize should still be in the image domain, because we want to require some nice properties of the image we are after. And we use an offline training stage, which on the one hand requires some training images, but on the other hand makes the algorithm very fast, because all the heavy work is done once, before the scanner goes into operation; in the reconstruction stage it is then almost as fast as the FBP.

So, the algorithm uses a set of reference images, high-quality CT images, together with the corresponding clean sinograms. Such data can be obtained, for instance, using cadavers or phantoms, which can be scanned without any hesitation as to the radiation dose.

Okay, so the algorithm goes as follows. We use the K-SVD algorithm to train a dictionary for a very similar equation as before, but now we extract the patches from the sinogram and not from the image. Apart from that, it is essentially the same equation, except for one very important difference: we do not need different coefficients for the different representations. The norm here is a weighted L2 norm with a special weight matrix W, detailed on the next slide, which helps us to normalize the noise over all the patches. So we do the encoding with a fixed threshold Q, which is simply the size of the patch, the number of pixels in it, and the noise is normalized correspondingly, so that this works.
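
In symbols, the training stage just described amounts to (my reconstruction from the verbal description)

\[
\min_{D_1,\{\alpha_j\}}\ \sum_j \|\alpha_j\|_0
\quad\text{s.t.}\quad
\big(D_1\alpha_j - E_j\tilde g\big)^{T} W_j \big(D_1\alpha_j - E_j\tilde g\big) \;\le\; Q
\ \ \text{for all } j,
\]

with \(W_j\) a diagonal matrix of photon counts for patch \(j\) and \(Q\) the number of pixels in a patch.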

Now, once that is done, we have a dictionary and a set of representations that provide a good sparse encoding of the sinogram. One could stop here and use this dictionary to improve sinograms in the future; but recall that we want the penalty to be in the image domain, while here we are not comparing to anything: we just require that each patch has a good sparse code. So the second step is to take these representations and train a better dictionary.

Now, this expression here is actually the restored sinogram: I take the sparse-encoded patches, D times alpha_j, return them to their locations in the sinogram matrix, and finally the M-inverse accounts for the patch overlaps. So this is the sinogram after I remove all the unnecessary noise.
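
In formula form (my notation for what the slide shows):

\[
\hat g \;=\; M^{-1} \sum_j E_j^{T} D\,\alpha_j,
\]

where \(E_j^{T}\) returns a patch to its location in the sinogram and the diagonal matrix \(M = \sum_j E_j^T E_j\) counts how many patches cover each sinogram pixel.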

And A is some fixed reconstruction algorithm. We want it to be linear, so that this equation can be solved, but it could also be nonlinear if the equation can still be solved. So it can be the filtered back projection or some other linear algorithm, like the Fourier-based algorithm for the inverse Radon transform. Here A is a linear function of the data provided to it, so this L2 norm is easily minimized for the dictionary using least squares; you can write the solution in closed form.

All this is an offline training which prepares these two dictionaries, D1 and D2. In the second training stage we compare the reconstructed image with the original one, and here we also use a weighted L2 norm, which allows us to demand specific things about the reconstruction error that we observe in this term; the weight matrix Q allows us to do some things which I will show in a couple of minutes.
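
So the second training stage, as I understand it, solves

\[
\min_{D_2}\ \Big\|\, Q^{1/2}\Big( f \;-\; A\, M^{-1}\sum_j E_j^{T} D_2\,\alpha_j \Big) \Big\|_2^2,
\]

where \(f\) is the reference image, \(A\) is the fixed linear reconstruction operator, and \(Q\) is the diagonal weighting map discussed below; the expression is linear in \(D_2\), hence the least-squares solution.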

Meanwhile, here is how I use this trained data. Given a new noisy sinogram g-tilde, I compute the representations using sparse coding, with the same threshold Q and the dictionary D1; then these representations are used, with the same formula as before, to assemble the restored sinogram. Finally, when I have the restored sinogram, I apply the filtered back projection to compute the image.
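
Putting the online stage together, here is an illustrative sketch (my code and naming; `omp` is the coder sketched earlier, and scikit-image's `iradon` stands in for the scanner's FBP):

```python
import numpy as np
from skimage.transform import iradon   # standard filtered back projection

def reconstruct(noisy_sino, counts, D1, D2, theta, p=8):
    """Online stage (sketch): weighted sparse coding of sinogram patches
    with D1, patch re-assembly with D2, then a linear FBP."""
    n_rows, n_cols = noisy_sino.shape
    acc = np.zeros((n_rows, n_cols))
    cov = np.zeros((n_rows, n_cols))        # M: how many patches cover each pixel
    for i in range(n_rows - p + 1):
        for j in range(n_cols - p + 1):
            x = noisy_sino[i:i+p, j:j+p].ravel()
            w = np.sqrt(counts[i:i+p, j:j+p].ravel())            # W^(1/2) weights
            a = omp(w[:, None] * D1, w * x, eps=np.sqrt(p * p))  # threshold Q = p*p
            acc[i:i+p, j:j+p] += (D2 @ a).reshape(p, p)
            cov[i:i+p, j:j+p] += 1.0
    restored = acc / cov                    # M^{-1} sum_j E_j^T D2 alpha_j
    return iradon(restored.T, theta=theta)  # assumes rows of the sinogram are angles
```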

Now, what are the matrices W and Q that I was talking about? In order to build a good W, one that normalizes the error in the sinogram domain so that all these patch differences have the same statistics, we need to recall what the statistics of the noise in the sinogram are.

Using the statistical model introduced in the beginning, one can deduce that the variance, at each location of the sinogram, of the difference between the ideal sinogram and the measured one is approximately inverse to the true photon count. We do not have the true count, but the measured count provides a good approximation. So when I multiply by one over this variance, I get uniform noise in the sinogram domain, and summing over the Q pixels of a patch, I expect the error energy to be just Q. Therefore I can take the weight matrix to be a diagonal matrix containing the measured photon counts, in order to produce a correct sparse coding with a uniform threshold.
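
In short (my summary of the argument): with \(N_i\) the photon count at sinogram bin \(i\),

\[
\operatorname{Var}\big[\tilde g_i - g_i\big] \;\approx\; \frac{1}{N_i^{\text{true}}} \;\approx\; \frac{1}{N_i^{\text{measured}}},
\qquad
W \;=\; \operatorname{diag}\big(N_i^{\text{measured}}\big),
\]

so that the weighted error \(W^{1/2}(\tilde g - g)\) has roughly unit variance per pixel, and its expected energy over a patch of \(Q\) pixels is simply \(Q\).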

And now to the question of what kind of error measure I am using. In fact, with this slide I am hoping for your help more than coming to tell you something, because this is something I stumbled upon that was not really considered in the literature: not much supervised learning for CT reconstruction has been done so far.

I would like to think of an error measure which can be designed using a good-quality reference image, and which would help me to promote good properties of the reconstructed image. For instance, when I look at the head, I see regions of bone and regions of air, which are not necessary for my reconstruction if I am interested only in the soft tissues. There is a great dynamic range of CT values: bone here is about one thousand four or five hundred, air is minus one thousand, and the soft tissues are around plus or minus thirty. So in this special map, by design, I remove those regions.

It can be seen on this piecewise-constant phantom that those regions are completely removed from the map; they are not counted when I compute my error over the rest of the image.

Also, I would like to emphasize the edges of the soft tissues, so that all those boundaries will be reconstructed with good quality. So overall I designed such a weighting map (and maybe there are other designs that could be proposed), and with respect to this map I can obtain visually better images.
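
One possible construction of such a map (an illustrative sketch; the thresholds and the edge-boost factor are my assumptions, not the values used in the work):

```python
import numpy as np
from scipy import ndimage

def soft_tissue_weight_map(ref_hu, lo=-300.0, hi=300.0, edge_boost=2.0):
    """Build a diagonal weighting map Q from a reference image in Hounsfield
    units: mask out air and bone, and up-weight soft-tissue edges."""
    q = ((ref_hu > lo) & (ref_hu < hi)).astype(float)    # keep only soft tissue
    grad = ndimage.gaussian_gradient_magnitude(np.asarray(ref_hu, float), sigma=1.0)
    q *= 1.0 + edge_boost * grad / (grad.max() + 1e-12)  # emphasize tissue edges
    return q
```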

Finishing in just one minute, the last slides show some results. This is the piecewise-constant phantom, with rather strong noise applied to it. This is the reconstruction with the standard filtered back projection, where its parameter, the cutoff frequency, was optimally chosen; it is compared to our algorithm and to the filtered back projection result which uses twice as many photons. By the signal-to-noise ratio measurements, one can observe that these two are more or less the same.

There are also some results on clinical images. This is a head section from the Visible Human source; this, again, is the FBP algorithm, which is a little bit noisy, and here we recover the fine details much better, roughly on a par with what FBP can do with double the dose.

So, to summarize: we can see that sparse representations can already be used for computed tomography, and they can produce very good results while not taking much computational effort. They can also be easily incorporated into existing clinical scanners, since it is just a matter of replacing the software. That is it for now, and thank you very much.

(Session chair) Thank you. Please use the microphone if you have a question.

(Audience) A comment first: there is an approach where one irradiates only a small neighborhood of the region of interest, plus a few global projections, so that the low frequencies can also be recovered. My question is whether your approach can be combined with that kind of reconstruction.

(Speaker) You ask if my approach can be used for reconstruction from partial data: if I only measure the rays that pass through the region of interest, can I still use this technique? I would say yes, because my technique is local: I work locally in the sinogram, taking small patches and working on them. So even if I have partial data, and some method of dealing with it, like the extrapolation which is usually used, I can still work on the available partial data and do some preprocessing in order to get a better result before applying that algorithm. So yes, you are right, those things can be combined.

(Session chair) Okay, thank you. Let us move on.