Good morning everyone. This talk is about lossy compression of hyperspectral and ultraspectral images. It is joint work with colleagues at the University of Siena.

First I will give some motivation for onboard compression and its specific problems; then I will describe the proposed compression algorithm, provide some experimental results on hyperspectral and ultraspectral images, and finally draw some conclusions.


Hyperspectral images are a collection of pictures of the same scene taken at several different wavelengths. That is quite a lot of data, and when it comes to compressing those data onboard a satellite, we are faced with the problem that we don't have many computational resources available to do the compression. So the first requirement is low complexity, which is quite different from compression on the ground, for example for archival purposes.

The second important thing is that, unlike typical consumer digital cameras, where a two-dimensional detector array takes the picture, hyperspectral imagers just have a single one-dimensional array that acquires one line of the image with all its spectral channels, that is, all the different wavelengths. The other lines of the image are formed by the motion of the aircraft or satellite, so they are taken at different times. What this means is that when we do the compression we don't have the whole image available, but just a few lines of it with all their spectral channels, so we need to do the compression with a very low-memory algorithm.

That said, we want to compress as well as we can, so we want state-of-the-art compression efficiency. For hyperspectral applications we need to cover a range of bit rates which is relatively large, from around 0.5 up to 3 or 4 bits per pixel; that should be compared with the sensor bit depth, which is typically between 12 and 16 bits per sample. So we have to cover both the low-bit-rate and the high-bit-rate regimes.

Finally, we need some error containment: the compressed data packets are downlinked over a communication channel which is subject to occasional packet losses, and we don't want a single lost packet to disrupt the reconstruction of the whole image.

There are several available options for performing compression of three-dimensional data sets like hyperspectral or ultraspectral images. The most popular approach uses three-dimensional transform coding, for example JPEG 2000 Part 2, which allows a multi-component transformation: you can use a spectral wavelet transform or an arbitrary spectral transformation, and then apply JPEG 2000 to each of the transformed spectral channels. We can use wavelets, the Karhunen-Loève transform, nonlinear transforms, whatever. This works very well at low rates. The problem of this approach is its high complexity, which comes partly from the transform, but even more from the coding and rate-distortion optimization of JPEG 2000, where you have to do tier-1 coding of all the coding passes and then post-compression rate-distortion optimization. That is just too complex for onboard compression, although it works very well for on-ground archival.

The second thing is that if we want to use JPEG 2000 onboard, we don't have the whole image available for the spatial transformation; we have to process just a few lines of the image at a time. That is possible with JPEG 2000 using the line-based transformation, but then the rate-distortion optimization cannot be done in a global way anymore: it has to be done locally, on just a few lines at a time, and there is a big performance penalty with respect to a globally optimal rate-distortion allocation.

The other approach is to use prediction techniques, with three-dimensional spatial and spectral prediction. Prediction has been used for a long time for lossless compression, and also for near-lossless compression, which is used for high-quality applications where you want the maximum absolute error between the decoded and the original image to be bounded by a user-selected value. That works very well at high rates, but it doesn't work as well at low bit rates.
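As a side note, the standard way to get that bounded-error behavior is to quantize the prediction residual uniformly with step 2δ+1, where δ is the user-selected maximum error. A minimal sketch; this is the textbook DPCM construction, the talk only states the error bound:

```python
def near_lossless_quantize(e: int, delta: int):
    """Quantize a prediction residual e so that the reconstruction
    error is at most delta (near-lossless coding)."""
    step = 2 * delta + 1
    sign = 1 if e >= 0 else -1
    q = sign * ((abs(e) + delta) // step)   # index of nearest multiple of step
    e_hat = q * step                        # reconstructed residual, |e - e_hat| <= delta
    return q, e_hat
```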

Moreover, three-dimensional prediction is usually coupled with scalar quantization and Golomb entropy coding, and then it is clear that we cannot go below one bit per pixel: the shortest Golomb codeword is one bit, so one bit per sample is the best a Golomb code can provide. So going below one bit per pixel is a problem.

What we propose for onboard compression is based on an approach that uses a three-dimensional spatial and spectral predictor, which keeps the low complexity that we need for onboard compression. But then we are faced with the problem of improving the performance at low bit rates, where existing schemes just don't perform well. Since we don't really need to perform near-lossless compression, we move to truly lossy compression of the prediction residuals. In order to do that, we improve the quantization stage, so we don't use a simple scalar quantizer, and we add a rate-distortion optimization of the whole scheme. This is how we do it.

The prediction stage performs the prediction independently on 16-by-16 blocks of samples. The picture shows an image divided into 16-by-16 blocks; for every block we look at all the channels, that is, at the co-located blocks in the different spectral channels, stacked along the wavelength dimension, and we predict the current block from the co-located, already decoded block in the preceding spectral channel. This is quite different from the kind of prediction that is usually employed in hyperspectral image compression, which is pixel-by-pixel rather than block-by-block; but, as we will see, it allows a very efficient rate-distortion optimization.

Essentially, what we do, and I will detail this in the next slide, is to calculate a linear predictor that uses the previous block to predict the current block; then we calculate and encode the prediction residual. A nice thing about this is that it provides spatial error containment: if the compressed data of one block is lost, that will affect the blocks in the same spatial position in the following spectral channels, but it will not affect any other spatial location, so the damage is confined.

The prediction itself is actually quite simple. We call x the vector of samples of the current 16-by-16 block, in lexicographic ordering, and r the vector of samples of the reference block, which is, as I said, the co-located decoded block in the previous spectral channel. Then we calculate a least-mean-square predictor, which is defined by two parameters, mu and alpha: mu is the mean value of the current block, and alpha is the least-mean-square parameter that predicts the current block from the reference block.
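To make the structure concrete, here is a minimal sketch of such a block predictor in Python. The talk only says that the predictor is defined by the block mean mu and an LMS gain alpha; the exact normalization (subtracting the reference mean, the closed-form least-squares gain) is my assumption:

```python
import numpy as np

def block_predictor(x: np.ndarray, r: np.ndarray):
    """Predict the current 16x16 block x from the co-located, already
    decoded block r in the previous spectral channel."""
    x = x.ravel().astype(float)   # lexicographic ordering of the block
    r = r.ravel().astype(float)
    mu = x.mean()                 # mean of the current block
    rc = r - r.mean()             # zero-mean reference block
    alpha = (x - mu) @ rc / max(rc @ rc, 1e-12)   # least-squares gain
    x_hat = mu + alpha * rc       # predicted block
    return x_hat, x - x_hat, mu, alpha            # prediction and residual
```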

Let me now describe the first ingredient of the lossy compression, namely quantization. Near-lossless compression techniques use scalar uniform quantization, which is almost optimal at high bit rates but far from optimal at low bit rates. For low-bit-rate compression it is customary to use a quantizer with a dead zone, which creates long sequences of zeros that are packed very effectively by entropy coders; that is optimal at low rates but not at high rates. To find something that works well at all rates, we decided to use a kind of quantizer which is called the uniform threshold quantizer, or UTQ. It is slightly more complex than a uniform quantizer with a dead zone, but it is near-optimal at all rates. The UTQ is actually very simple: it is a quantizer in which all the decision intervals have the same size, so calculating the codeword is done in much the same way as in a classical scalar uniform quantizer. The difference lies in the fact that the reconstruction level is not taken as the midpoint of the quantization interval, but rather as its centroid.

Since we are applying this quantizer to the prediction residuals, we can assume a model for their distribution, namely a two-sided exponentially decreasing distribution, the Laplacian distribution, and we calculate the actual reconstruction levels as the centroids of the quantization intervals under this distribution. If you look at this picture, you can see the different quantization intervals; the midpoint of each interval is what a uniform quantizer would use as the reconstruction point. Since we assume a Laplacian distribution, low values of the prediction residual are more probable than high values, so we add a correction term to the reconstruction point to account for that. The correction biases the reconstruction towards zero, and it does so the more strongly the lower the quantization index, that is, the closer the residual is to zero; this is what makes the quantizer near-optimal at all rates.
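Here is a minimal sketch of a UTQ along these lines. The uniform index computation is as described in the talk; the closed-form Laplacian centroid offset is the standard one and is my reconstruction, not a formula given in the talk:

```python
import numpy as np

def utq_quantize(residual: np.ndarray, step: float) -> np.ndarray:
    """All decision intervals have the same width, so the index is
    computed exactly like a plain uniform quantizer."""
    return np.round(residual / step).astype(int)

def utq_dequantize(q: np.ndarray, step: float, lam: float) -> np.ndarray:
    """Reconstruct at the interval centroid under a Laplacian residual
    model with rate parameter lam, instead of at the midpoint."""
    mid = q * step
    # Centroid of an interval of width `step` under an exponential tail:
    # it sits closer to zero than the midpoint by this offset.
    offset = step / 2.0 - 1.0 / lam + step / np.expm1(lam * step)
    return np.where(q == 0, 0.0, mid - np.sign(q) * offset)
```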

The second ingredient, and the most important one, is the rate-distortion optimization, and this is where using square blocks for the prediction really helps. The idea here is essentially similar to the skip mode of video compression: sometimes we find a 16-by-16 block that can be predicted very well from its reference block, and in that case we skip the encoding of the prediction residual altogether. We save a lot of bits in the process and just signal to the decoder that the decoded signal for this block is simply the prediction, which the decoder can compute by itself. In particular, we actually perform the prediction, we calculate the variance D of the prediction residual, and we compare this variance with a threshold. If D exceeds the threshold, it means that the predictor is not good enough for the current block, so we perform the classical encoding of the quantized prediction residual. If D is below the threshold, we simply write the prediction parameters for that block to the file, but no prediction residual; the decoder will then read the prediction parameters from the file, compute the prediction, and use the prediction as the decoded block.
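Putting the pieces together, a hedged sketch of the per-block encoding decision might look as follows. The variance-threshold test is as stated in the talk, while the bitstream helpers (write_params, write_flag, write_residual) are hypothetical names used for illustration:

```python
def encode_block(x, r, step, lam, threshold, bitstream):
    """Skip-mode decision for one 16x16 block (sketch)."""
    x_hat, residual, mu, alpha = block_predictor(x, r)
    D = residual.var()                    # variance of the prediction residual
    bitstream.write_params(mu, alpha)     # predictor parameters are always sent
    if D < threshold:
        bitstream.write_flag(skip=True)   # decoder will just use the prediction
    else:
        bitstream.write_flag(skip=False)
        q = utq_quantize(residual, step)  # quantize the residual...
        bitstream.write_residual(q)       # ...and entropy-code it (Rice coding, below)
```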

The entropy coding of the quantized prediction residuals is done using Golomb power-of-two codes. This is a very typical choice in compression for satellite imaging, because Golomb power-of-two codes are much simpler than any other entropy coder, especially arithmetic coders. They are not as powerful, but they offer a good compromise between performance and complexity. We calculate the best coding parameter for every sample, based on the average magnitude of the past prediction residuals over a window of thirty-two samples; so the adaptation is not done block by block, but sample by sample.
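For reference, here is a minimal sketch of Golomb power-of-two (Rice) coding with that kind of windowed adaptation. The 32-sample window is stated in the talk; the particular parameter-selection rule and the signed-to-unsigned mapping are common conventions I am assuming, not details given in the talk:

```python
def rice_parameter(past_residuals) -> int:
    """Choose k from the mean magnitude of the last 32 residuals."""
    window = past_residuals[-32:]
    mean_mag = sum(abs(v) for v in window) / max(len(window), 1)
    k = 0
    while (1 << k) < mean_mag:            # smallest k with 2**k >= mean magnitude
        k += 1
    return k

def rice_encode(value: int, k: int) -> str:
    """Rice code of a residual: unary quotient, then k remainder bits.
    Note the shortest codeword is 1 bit, which is why plain Rice coding
    cannot go below 1 bit per sample."""
    v = 2 * value if value >= 0 else -2 * value - 1   # zigzag map to non-negative
    bits = "1" * (v >> k) + "0"                       # unary part with terminator
    if k:
        bits += format(v & ((1 << k) - 1), f"0{k}b")  # k-bit remainder
    return bits
```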

Here are some results for the proposed algorithm. We have tried it on many different images; I will show results for images from two different data sets. The first is AVIRIS, a hyperspectral imaging spectrometer which is flown on an aircraft. These images have 224 spectral channels and a spatial size of 614 by 512 pixels. We use the raw images as acquired by the sensor: they have no calibration whatsoever and no corrections applied. AVIRIS data are typically used for classification applications.

The second image is an ultraspectral image from the AIRS sounder, which is operated by NASA and is used for atmospheric studies. These images have much lower spatial resolution, just 135 by 90 pixels, but they have a very large number of spectral channels: 1501.

As a quality metric we look at the peak signal-to-noise ratio (PSNR), and we compare the performance of the proposed algorithm with two other algorithms. The first is JPEG 2000 Part 2 with the spectral discrete wavelet transformation. In this case the full three-dimensional rate-distortion optimization is performed and no line-based transform is used, so what is shown for JPEG 2000 is an unrealistic setting that one would not actually run onboard a satellite: a sort of upper bound on the performance of JPEG 2000. The second algorithm is near-lossless compression, which uses exactly the same predictor and entropy coder, but neither the UTQ quantizer nor the rate-distortion optimization: just a plain DPCM with scalar uniform quantization and entropy coding of the prediction residuals.

Here are the results for the AVIRIS image. One curve is JPEG 2000 with the spectral wavelet transformation, and the other is the near-lossless compression algorithm. It is known that near-lossless compression is better than transform coding at high bit rates, and you can see that here: the performance gap with respect to JPEG 2000 gets large above two bits per sample. At low rates, though, near-lossless compression is not as good, essentially for two reasons. One is related to the fact that at low rates the quantization step size is large, and so the quality of the reference signal used for the prediction is poor; this brings the performance down at low bit rates. The other is that near-lossless compression is simply not able to achieve rates below one bit per pixel, because we are using a Golomb code whose minimum codeword length is one bit: there is just no way to go below that.

The proposed algorithm seems to bring the best of both worlds here. It is better than JPEG 2000 whenever the bit rate is larger than about 0.3 or 0.35 bits per sample, so even at low bit rates the rate-distortion optimization works pretty nicely. Furthermore, its performance tends to that of the near-lossless compression at high rates, which is reasonable: at high bit rates the algorithm will almost never select the skip mode for any block of the image, and the uniform threshold quantizer tends to the scalar uniform one, so the two algorithms essentially become the same.

We have broadly similar results for the AIRS image. Here JPEG 2000 is a little better, sometimes outperforming the proposed algorithm by a small margin, so essentially the two have comparable performance. And it is pretty much the same story for near-lossless compression: it is not as good at low bit rates and becomes better at high bit rates. So the proposed algorithm still lags a little behind JPEG 2000 for this image, but recall that JPEG 2000 here is using the full three-dimensional transform; if we used the line-based transform it would not perform as well.

This is an example of visual quality, and it essentially goes to show that although we are using a block-based predictor, we don't get any blocking artifacts. This is a patch from one of the AVIRIS channels, the original signal, and this is the signal reconstructed by the proposed algorithm at 0.14 bits per pixel, which is one of the lowest bit rates the encoder can achieve. As can be seen, there are artifacts, but no blocking artifacts whatsoever. The reason is that blocking artifacts come from the coupling of quantization with a block-based transformation, whereas in this case we are using a block-based predictor but the quantization is applied to each residual sample independently of the prediction. Compare this with what JPEG would produce, for example, which creates a lot of blocking here, whereas the proposed scheme essentially keeps the texture.

To conclude, the proposed algorithm is essentially a new scheme for onboard compression of hyperspectral images. We achieve low complexity by using a prediction-based approach, which performs as well as or better than state-of-the-art three-dimensional transform coding with full rate-distortion optimization, so it seems to be a nice way forward for onboard compression of satellite images. The complexity and memory requirements are significantly lower than those of JPEG 2000. It is difficult to compare the complexity of different algorithms, but according to people working on JPEG 2000, the proposed approach seems to take about two orders of magnitude fewer operations than JPEG 2000 for roughly the same performance.

There is still room for improvement. We are not using any arithmetic coding, but that would certainly improve the coding efficiency: an arithmetic coder would outperform Golomb codes by some margin. We might also use band reordering, that is, using as the reference for the prediction not simply the previous spectral channel, but the spectral channel which is most correlated with the current one. This is especially relevant for the AIRS images, where it provides a nice performance gain.

This algorithm has been proposed to the European Space Agency and is in the baseline for the hyperspectral imager of a mission that is going to fly to Mars. Thank you. Do you have any questions?

Question: Can you comment on how the compression technique might affect processing that would occur after the images are transmitted, for example end-member extraction or some sort of classification task?

Answer: Yes. That is something anyone proposing lossy compression to the remote sensing community runs into: people are really scared about the potential negative effects of lossy compression. We have run experiments on that in the past, and there are several quality metrics you can use to measure it, not just mean squared error: the maximum error, the spectral angle, and a lot of other metrics. But my experience is that if the mean squared error is sufficiently small, then everything works very nicely, and for this kind of mission you definitely want to keep the distortion sufficiently small anyway.

Not for hyperspectral imagers, but for multispectral sensors, existing systems actually do use lossy compression: SPOT5 uses lossy compression at a bit rate of, I think, about three bits per pixel, and other systems use lossy compression as well. Government agencies, which operate on public funding, don't really worry much about lossy compression, whereas private companies do care about it. So my feeling is that lossy compression is not a big deal as long as the mean squared error is small enough.

There are exceptions, obviously, even when the mean squared error is small. A problem comes, for example, from applications like anomaly detection, where a large error on one single pixel can actually bias the result of the detection, so one has to be careful there. But for classification, my feeling is that everything more or less tracks the mean squared error: if the mean squared error is low, you are fine.

Moderator: We have time for one more quick question... OK, let's thank the speaker.