0:00:13 Good morning, everyone. This talk is about on-board lossy compression for hyperspectral and ultraspectral images.
0:00:21 This is joint work with my colleagues at the university.
0:00:25 First I will give some motivation for on-board lossy compression and its specific problems;
0:00:31 then I will describe the proposed compression algorithm and provide some experimental results on hyperspectral and ultraspectral images, and
0:00:39 finally draw some conclusions.
0:00:42 Hyperspectral images are a collection of pictures of the same scene taken at several different wavelengths.
0:00:48 That is quite a lot of data, but when it comes to compressing those data on board a satellite, we are faced with the problem that we don't have many computational resources to actually do the compression.
0:00:58 So the first thing we need is low complexity, which is quite different from compression on the ground.
0:01:05 The second important thing is that, unlike typical consumer digital cameras, where there is a two-dimensional detector array to take the picture,
0:01:14 for hyperspectral imagers we just have a single one-dimensional array that takes one line of the image with all its spectral channels, all the different wavelengths.
0:01:25 The other lines of the image are formed by the motion of the aircraft or satellite, so they are taken over time.
0:01:32 What this entails is that when we do the compression we don't have the whole image available, but just a few lines of the image with all the spectral channels, so we need to do compression with only a small buffer of lines in memory.
0:01:46 So, what do we want? We want to compress as well as we can, so we want state-of-the-art compression efficiency.
0:01:52 For hyperspectral applications we need to cover a range of bit rates which is relatively large, typically from around 0.5 up to 3 or 4 bits per pixel.
0:02:01 That should be compared with the sensor bit depth, which is typically between 12 and 16 bits per sample.
0:02:07 So we have to cover low bit rates and high bit rates in a seamless fashion.
0:02:12 Finally, we need some error containment: the compressed data packets are downlinked over a communication channel which is subject to occasional packet loss, and we don't want a single packet loss to disrupt the reconstruction of the whole image.
0:02:28 There are several available options for performing compression of three-dimensional images such as hyperspectral or ultraspectral images.
0:02:37 The most popular approach uses three-dimensional transform coding, for example JPEG 2000 Part 2, which allows a multi-component transformation:
0:02:46 you can use a spectral wavelet transform or an arbitrary spectral transformation, and then apply JPEG 2000 on each of the transformed spectral channels.
0:02:56 For the spectral part you can use wavelets, the KLT, nonlinear transforms, whatever.
0:03:00 This works very well at low rates; the problem of this approach is its high complexity.
0:03:05 The complexity comes from the transform itself, from the entropy coding, and from the rate-distortion optimization used in JPEG 2000, where you have to do bit-plane coding of all coding passes and then post-compression rate-distortion optimization.
0:03:17 That is just too complex for on-board use, although it does work very well for archival.
0:03:24 The second thing is that if we want to use JPEG 2000 for on-board compression, we don't have the whole image available for the spatial transformations;
0:03:31 we have to take just a few lines of the image at a time, which is possible with JPEG 2000 using the line-based transformation.
0:03:41 But then the rate-distortion optimization cannot be done in a global way anymore; it has to be done in a local way, just a few lines at a time, and there is a big performance penalty with respect to the globally optimal rate-distortion approach.
0:03:56 The other approach is to use prediction techniques, that is, three-dimensional spatial and spectral prediction. Prediction has been used for a long time for lossless compression.
0:04:07 Near-lossless compression is typically used for high-quality applications, where you want the maximum absolute error between the decoded and the original image to be bounded by a user-selected value.
0:04:18 That works very well at high rates, but it doesn't work as well at low bit rates.
0:04:25 Moreover, three-dimensional prediction is usually coupled with scalar quantization and Huffman entropy coding, and it is clear that we cannot go below one bit per pixel: the shortest codeword Huffman coding can provide is just one bit.
0:04:38 So it is a problem to go below one bit per pixel.
0:04:42 What we propose for on-board compression is based on a prediction approach, with a three-dimensional spatial and spectral predictor, which gives us the low complexity needed for on-board compression.
0:04:55 But then we are faced with the problem of improving the performance at low bit rates, which existing schemes just don't handle.
0:05:02 Since we don't need a bounded error, we don't really need to perform near-lossless compression, so we move to truly lossy compression of the prediction residuals.
0:05:11 In order to do that, we improve the quantization stage (we don't use a simple scalar quantizer), and we apply rate-distortion optimization to the whole scheme.
0:05:23 This is how we do it. The prediction stage performs the prediction independently on 16-by-16 blocks of samples.
0:05:34 The picture shows an image divided into 16-by-16 blocks; for every block we look at the co-located blocks in the different spectral channels, that is, along the wavelength dimension, and we predict the current block from the co-located decoded block in the preceding spectral channel.
0:05:54 This is quite different from the kind of prediction usually employed in hyperspectral image compression, which is pixel-by-pixel rather than block-by-block; but as we will see, this choice allows a very efficient rate-distortion optimization.
0:06:08 Essentially what we do, and I will detail it in the next slide, is calculate a linear predictor that uses the previous channel to predict the current block; then we calculate and quantize the prediction residual.
0:06:19 A nice thing about this is that it provides spatial error containment: if the compressed data of one block are lost, that will affect the blocks in the same spatial position in the subsequent wavelengths, but it will not affect any other spatial location.
0:06:39 The prediction itself is actually quite simple. Let x be the vector of samples of the current 16-by-16 block, in lexicographic ordering, and r the vector of samples of the reference block, which is, as I said, the co-located decoded block in the previous spectral channel.
0:06:55 Then we calculate a least-mean-square predictor, which is defined by two parameters, μ and α: μ is the mean value of the current block, and α is the least-mean-square parameter that predicts the current block from the reference block.
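The two-parameter block predictor just described can be sketched in code. This is a minimal illustration, assuming the predictor has the form x_hat = μ + α(r − mean(r)), with μ the mean of the current block and α the least-squares gain; the exact formulation in the paper may differ.

```python
import numpy as np

def block_predictor(x, r):
    """Least-mean-square prediction of the current block x from the
    co-located reference block r (both flattened in lexicographic order).
    Assumed form: x_hat = mu + alpha * (r - mean(r))."""
    mu = x.mean()                 # first parameter: mean of the current block
    rc = r - r.mean()             # zero-mean reference block
    energy = np.dot(rc, rc)
    # second parameter: least-squares gain mapping r onto x
    alpha = np.dot(x - mu, rc) / energy if energy > 0 else 0.0
    x_hat = mu + alpha * rc       # predicted block
    return mu, alpha, x_hat
```

The residual x − x_hat is what gets quantized and entropy coded; the decoder repeats the same computation from the decoded reference block, so only μ and α need to be transmitted per block.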
0:07:12 The first thing we do to move from near-lossless to lossy compression concerns the quantization.
0:07:19 The typical choice for near-lossless compression is uniform scalar quantization, which is almost optimal at high bit rates but far from optimal at low bit rates.
0:07:29 For low-bit-rate compression it is customary to use quantizers with a dead zone, which create long sequences of zeros that are packed very effectively by entropy coders; this is optimal at low rates but not at high rates.
0:07:42 To find something that works well at all rates, we decided to use a kind of quantizer called the uniform threshold quantizer, or UTQ, which is slightly more complex than the uniform quantizer with dead zone, but is near-optimal at all rates.
0:07:58 The UTQ is actually very simple: it is a quantizer in which all the decision intervals, the thresholds, are of the same size, so calculating the codeword is done in much the same way as with a classical uniform quantizer.
0:08:15 The difference lies in the fact that the reconstruction level is not taken as the midpoint of the quantization interval, but rather as its centroid.
0:08:24 Since we are applying this to the prediction residuals, we assume that the residuals follow a two-sided exponentially decreasing distribution, a Laplacian distribution, and we calculate the actual reconstruction levels using this distribution.
0:08:44 If you look at this picture you can see the different quantization intervals; a plain uniform quantizer would put the reconstruction point at the midpoint of each interval.
0:08:58 Since we assume this distribution, the low values of the prediction error are more probable than the high values, so we add a correction term to the reconstruction point to account for that.
0:09:12 This correction biases the reconstruction towards zero a little bit, the more so as the quantization index gets close to zero, so that the quantization error is lower.
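The UTQ can be sketched as below: uniform decision intervals, with reconstruction at the centroid of each interval instead of its midpoint. The closed-form centroid of an exponential density over an interval is an assumption of this sketch (the talk does not give the exact correction formula); `lam` is the Laplacian rate parameter, which would be estimated from the residual statistics.

```python
import numpy as np

def utq_encode(residual, step):
    """Uniform threshold quantizer: every decision interval has width `step`,
    so the index is computed exactly as in a plain uniform quantizer."""
    return np.round(residual / step).astype(int)

def utq_decode(index, step, lam):
    """Reconstruct at the centroid of each interval under a Laplacian
    residual model with rate `lam` (a modeling assumption), instead of at
    the midpoint.  The centroid of c*exp(-lam*t) over [a, a+step] is
    a + 1/lam - step*e/(1-e) with e = exp(-lam*step), which always lies
    below the midpoint, i.e. the reconstruction is biased towards zero."""
    sign = np.sign(index)
    k = np.abs(index)
    a = (k - 0.5) * step                      # lower edge of interval k > 0
    e = np.exp(-lam * step)
    centroid = a + 1.0 / lam - step * e / (1.0 - e)
    # the central interval is symmetric around zero, so its centroid is 0
    return np.where(k == 0, 0.0, sign * centroid)
```

As lam tends to zero (a flat distribution) the centroid tends back to the midpoint, which is consistent with the UTQ approaching the plain uniform quantizer at high rates.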
0:09:25 The second ingredient, and the most important one, is rate-distortion optimization; this is where it really helps to use square blocks for the prediction.
0:09:33 The idea here is essentially similar to the skip mode of video compression.
0:09:39 Sometimes we find certain 16-by-16 blocks that can be predicted very well from their reference block, and in that case we don't refine the prediction: we altogether skip the encoding of the prediction residual.
0:09:51 That way we save a lot of bits, and we just signal to the decoder that the decoded signal, in this case, is simply the prediction, which the decoder can compute on its own.
0:10:01 In particular, after the prediction we calculate the variance D of the prediction residual, and we compare this variance to a threshold.
0:10:11 If D exceeds the threshold, it means that the predictor is not good enough for the current block, so we do the classical encoding of the prediction residual.
0:10:22 If D is below the threshold, we simply put the prediction parameters for the block in the file, but no prediction residual; the decoder will then read the prediction parameters from the file, calculate the predictor, and use the prediction as the decoded block.
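The skip decision above amounts to a per-block variance test. A minimal sketch, in which `threshold` stands for the user-set variance threshold and `x_hat` is the predicted block produced by the block predictor; the names are illustrative, not the paper's.

```python
import numpy as np

def choose_block_mode(x, x_hat, threshold):
    """Skip-mode decision for one 16x16 block, as in video coding.

    If the prediction residual variance D is below `threshold`, the block
    is 'skipped': only the predictor parameters are written to the file,
    and the decoder outputs the prediction itself.  Otherwise the residual
    is quantized and entropy coded as usual."""
    residual = x - x_hat
    D = residual.var()
    if D <= threshold:
        return "skip", None       # no residual transmitted, big bit savings
    return "coded", residual      # residual goes on to the UTQ + Golomb stages
```

At high bit rates (small quantization step, low distortion target) the skip mode is essentially never selected, which is why the scheme converges to the near-lossless coder there.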
0:10:41 Entropy coding of the prediction residuals is done using Golomb power-of-two codes.
0:10:47 This is a very typical choice in compression for satellite imaging, because Golomb power-of-two codes are much simpler than any other entropy code, especially arithmetic coding.
0:10:59 They are not as powerful, but they are a good compromise between performance and complexity.
0:11:05 We calculate the best coding parameter for every sample, based on the mean magnitude of past prediction residuals over a window of thirty-two samples; so it is not done block by block, but sample by sample.
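A minimal sketch of Golomb power-of-two (Rice) coding with the parameter chosen per sample from a window of past residual magnitudes. The signed-to-unsigned mapping and the exact parameter rule are common conventions assumed here, not taken from the talk.

```python
def zigzag(v):
    """Map a signed residual to a non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * v if v >= 0 else -2 * v - 1

def rice_encode(value, k):
    """Golomb power-of-two code: unary quotient (value >> k), a stop bit,
    then the k low-order remainder bits.  The minimum codeword length is
    one bit, which is why such a coder alone cannot go below 1 bit/pixel."""
    q = value >> k
    bits = "1" * q + "0"
    if k:
        bits += format(value & ((1 << k) - 1), "0{}b".format(k))
    return bits

def pick_k(window):
    """Smallest k with 2^k >= mean magnitude over the recent samples
    (a common heuristic; e.g. a sliding window of 32 past residuals)."""
    mean = sum(window) / max(len(window), 1)
    k = 0
    while (1 << k) < mean:
        k += 1
    return k
```

Decoding reverses the steps: count the leading ones to get the quotient, then read k remainder bits, using the same `pick_k` state so encoder and decoder stay synchronized.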
0:11:19 Here are some results for the proposed algorithm.
0:11:24 We tried it on different images; I will show results for images from two different datasets.
0:11:30 The first is AVIRIS, an imaging spectrometer which is flown on an aircraft.
0:11:37 These images have two hundred and fifty-four spectral channels and a spatial size of roughly six hundred by five hundred and twelve pixels.
0:11:47 They are raw images, exactly as acquired by the sensor: they have no calibration whatsoever and no corrections, unlike the calibrated images that are typically used for applications such as classification.
0:12:03 The second image is an ultraspectral image from the IASI sounding instrument, which is used for atmospheric studies.
0:12:12 These images have much less spatial resolution, just one hundred and thirty-five by ninety pixels, but they have a very large number of spectral channels.
0:12:25 As a quality metric we look at the peak signal-to-noise ratio (PSNR), and we compare the performance of the proposed algorithm with two other algorithms.
0:12:33 The first is JPEG 2000 Part 2 with a spectral discrete wavelet transformation. In this case we do perform the full three-dimensional rate-distortion optimization, with no line-based transform; so the results shown for JPEG 2000 correspond to an unrealistic setting that one would not actually run on the satellite, a sort of upper bound on the performance of JPEG 2000.
0:12:56 The second algorithm is near-lossless compression using exactly the same predictor and entropy coder, but without the UTQ quantizer and without the rate-distortion optimization: just a plain DPCM with uniform quantization and entropy coding of the prediction residuals.
0:13:16 Here are the results: this curve is JPEG 2000 with the wavelet transformation, and this one is the near-lossless compression algorithm.
0:13:26 As is well known, near-lossless compression is better than transform coding at high bit rates, and you can see that here: the performance difference with respect to JPEG 2000 becomes large above two bits per sample.
0:13:38 At low rates, however, it is not as good, essentially for two reasons.
0:13:42 One is related to the fact that the larger the quantization step size, the worse the quality of the reference signal used for the prediction; this brings the performance down at low bit rates.
0:13:54 Moreover, this algorithm is not able to achieve rates below one bit per pixel, because we are using a Golomb code whose minimum codeword length is one bit; there is just no way to go below that.
0:14:06 The proposed algorithm seems to bring the best of both worlds here.
0:14:10 It is better than JPEG 2000 whenever the bit rate is larger than about 0.3 or 0.35 bits per pixel; at such bit rates the rate-distortion optimization works pretty nicely.
0:14:22 Furthermore, its performance tends to that of the near-lossless compression at high rates, and that is reasonable: at high bit rates the rate-distortion optimization will essentially never select the skip mode for any block of the image, and the uniform threshold quantizer tends to the uniform scalar quantizer,
0:14:39 so the two algorithms essentially become pretty much the same.
0:14:44 We have similar results for the IASI image.
0:14:48 Here JPEG 2000 is a little bit better, and sometimes outperforms the proposed algorithm by a small margin; the proposed algorithm is not quite as good, but the performance is essentially comparable.
0:15:01 It is pretty much the same story for near-lossless compression: it is not good at low bit rates and becomes good at pretty high bit rates.
0:15:08 So there is still a small loss with respect to JPEG 2000 for this image, but recall that JPEG 2000 here is using the full three-dimensional rate-distortion optimization; it would lose a lot if we used the line-based transform.
0:15:25 This is an example of visual quality, and it essentially goes to show that although we are using a block-based predictor, we don't get any blocking artifacts.
0:15:36 This is a patch from one of the AVIRIS images, the original signal, and this is the signal reconstructed by the proposed algorithm at 0.14 bits per pixel, which is one of the lowest bit rates that the encoder can achieve.
0:15:51 As can be seen, there are artifacts, but no blocking artifacts.
0:15:58 The reason is that blocking artifacts typically come from the coupling of quantization and a block-based transformation, whereas in this case we are using a block-based predictor, but the quantization is applied to the prediction residuals sample by sample.
0:16:12 This is unlike what you would get with, say, JPEG, which creates blocking; here the algorithm essentially keeps the texture.
0:16:23 To conclude: the proposed algorithm is essentially a new paradigm for compression of hyperspectral images.
0:16:29 We achieve low complexity by using a prediction-based approach which performs as well as, or better than, state-of-the-art three-dimensional transform coding with full rate-distortion optimization; so it seems to be a nice way forward for on-board compression of satellite images.
0:16:50 Complexity and memory requirements are significantly lower than for JPEG 2000. It is difficult to compare the complexity of different algorithms, but a rough estimate suggests that the proposed approach takes one to two orders of magnitude fewer operations than JPEG 2000 at the same bit rate.
0:17:11 There is still room for improvement: we are not using any arithmetic coding, and that would certainly improve the coding efficiency over Golomb coding by some margin.
0:17:23 We might also use band reordering, that is, using as the reference spectral channel for the prediction not simply the previous spectral channel, but the spectral channel which is most correlated with the current one. This is especially useful on the IASI data, where it provides a nice performance gain.
0:17:41 This algorithm has been proposed to the European Space Agency for the hyperspectral imager of the ExoMars mission, which is going to fly to Mars.
0:17:53 Thank you.
0:18:01 Do we have any questions?
0:18:08 Thank you. Can you make any comment regarding how the compression technique might affect processing that would occur after the images are transmitted, for example end-member extraction or some sort of classification task?
0:18:25 Yes. Whenever you propose lossy compression to remote sensing people, they are scared about the potential negative effects of lossy compression.
0:18:34 We have run experiments on that in the past. There are several quality metrics you can use to measure the distortion, not just mean squared error: the maximum error, the spectral angle, and a lot of other metrics.
0:18:48 My experience is that if the mean squared error is sufficiently small, then everything works very nicely; and for this kind of mission you definitely want to keep the distortion sufficiently small.
0:19:01 Not for hyperspectral but for multispectral sensors, existing systems actually do use lossy compression: SPOT 5 uses lossy compression at a bit rate of, I think, about three bits per pixel, and other systems use lossy compression as well.
0:19:20 The government agencies, which use public funding, don't really care that much about lossy compression, but the private companies do care. So my feeling is that lossy compression is not a big deal if the mean squared error is small enough.
0:19:33 There are exceptions, obviously. One problem comes, for example, from applications like anomaly detection, where a large error on one single pixel can actually bias the result of the anomaly detection, so one has to be careful there.
0:19:46 But for classification, my feeling is that performance more or less goes with the mean squared error: if the MSE is low, you are fine.
0:19:57 We have time for one more quick question.
0:20:03 Let's thank the speaker.