0:00:14 | This talk is about perceptual constraints, following Weber's law, applied to side-informed data hiding systems.

0:00:22 | This work is by colleagues who unfortunately could not be here today, so I will do my best to convey it to you.

0:00:32 | So this is the outline of the presentation.

0:00:35 | First we will see a brief introduction to Weber's law and perceptual constraints.

0:00:42 | Then we will define the perceptual constraints, the constraints coming from robustness considerations, and those coming from side-informed data hiding,

0:00:50 | and we will derive the corresponding embedding equation, which will follow a kind of generalized logarithmic DM.

0:01:00 | Then we will see the analysis of the embedding distortion power and of the probability of decoding error, and some results.

0:01:08 | In data hiding, in recent years a lot of attention has been paid

0:01:12 | to issues like how best to minimize the probability of decoding error,

0:01:16 | how to maximize the robustness against several attacks,

0:01:19 | how to lower the detectability in a steganographic context,

0:01:24 | so keeping the covert channel

0:01:26 | hidden,

0:01:27 | and also to security.

0:01:29 | But the perceptual impact

0:01:31 | has usually been undervalued:

0:01:33 | the number of works

0:01:35 | dealing with the perceptual impact is much lower than the number of works devoted to

0:01:39 | any of these other issues.

0:01:42 | And of all the characteristics of the human visual system that we can think of,

0:01:46 | this work is focused on Weber's law. Weber's law is a rule

0:01:52 | that relates

0:01:54 | the magnitude of the host signal

0:01:57 | with the magnitude of the distortion we can impose on that signal

0:02:00 | without it being perceptually noticed.

0:02:03 | The intuition behind Weber's law

0:02:05 | is the following: if we have a one-kilogram weight

0:02:09 | and we change it by two hundred grams,

0:02:11 | then this change will be noticed;

0:02:13 | but if the weight is fifty kilograms, that same change will be hardly noticeable.

0:02:17 | So

0:02:18 | the perceptual impact

0:02:20 | of a modification

0:02:21 | to a low-magnitude signal is not the same as the perceptual impact of that same

0:02:26 | modification

0:02:28 | to a very high-magnitude signal.

0:02:30 | So

0:02:32 | what Weber's law says

0:02:33 | is that the modification that the signal must undergo

0:02:37 | in order to produce the smallest noticeable difference

0:02:39 | is proportional to the magnitude of the signal itself.

0:02:42 | So the higher the magnitude of the signal,

0:02:44 | the larger the modification that we can apply to that signal

0:02:48 | while keeping it perceptually unnoticed.
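
The weight example above can be sketched numerically. A minimal illustration in Python, where the Weber fraction `k` is a hypothetical value chosen for illustration, not a figure from the talk:

```python
# Weber's law: the smallest noticeable change (just-noticeable difference,
# JND) is modeled as proportional to the magnitude of the stimulus.
def jnd(magnitude, k=0.02):
    """Hypothetical JND for a stimulus, with assumed Weber fraction k."""
    return k * abs(magnitude)

def is_noticeable(magnitude, change, k=0.02):
    """A change is noticeable when it exceeds the JND of the stimulus."""
    return abs(change) > jnd(magnitude, k)

# A 0.2 kg change on a 1 kg weight is noticed, but the same change on a
# 50 kg weight stays well below its (larger) threshold.
print(is_noticeable(1.0, 0.2))   # True
print(is_noticeable(50.0, 0.2))  # False
```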

0:02:53 | Weber's law is implicitly used

0:02:55 | by multiplicative spread-spectrum methods,

0:02:57 | in which the magnitude of the watermark that we are adding at each

0:03:01 | host coefficient

0:03:02 | is proportional to the magnitude of the host coefficient

0:03:06 | in which we are embedding it.

0:03:07 | But

0:03:08 | these methods are outperformed by side-informed data hiding schemes.

0:03:12 | So the question that we can ask at this point is:

0:03:15 | can we exploit

0:03:16 | the perceptual constraints implied by Weber's law

0:03:19 | in side-informed data hiding systems?

0:03:21 | And the answer is: yes, we can.

0:03:23 | In this work

0:03:24 | those perceptual constraints

0:03:28 | are characterized through

0:03:30 | Weber's law, and Weber's law is then used

0:03:34 | to derive a generalized version of a logarithmic embedding scheme,

0:03:38 | that is, of side-informed data hiding,

0:03:41 | and we will see several choices for the embedding and decoding regions

0:03:45 | as a function of the parameters of this scheme.

0:03:48 | So first of all we will define the constraints

0:03:50 | coming from Weber's law, from robustness considerations,

0:03:54 | and from side-informed data hiding,

0:03:55 | that define the embedding equation.

0:03:58 | As in spread spectrum, we have that the perceptual constraint

0:04:02 | is that the magnitude of each watermark coefficient

0:04:05 | is bounded by the magnitude of the host signal

0:04:12 | times the magnitude of the corresponding spreading-sequence coefficient,

0:04:15 | and also by a coefficient alpha that controls the watermark strength.

0:04:20 | We will exchange

0:04:21 | this constraint

0:04:24 | for a double bound,

0:04:26 | in which we consider here that x_i is positive; if it were negative, the analysis would be completely

0:04:30 | analogous.

0:04:31 | We will upper-bound

0:04:34 | the watermark coefficient by beta_2 times x_i, where beta_2 is positive,

0:04:38 | and lower-bound it by beta_1 times x_i, where beta_1

0:04:41 | is negative.

0:04:43 | From side-informed embedding we get the constraint that we have two types of codewords, depending on the hidden

0:04:48 | bit;

0:04:49 | here we are considering just binary embedding, but it would be completely analogous for any alphabet size.

0:04:56 | And from robustness we get two constraints.

0:04:59 | First, the embedding distortion has to be kept to a minimum:

0:05:02 | if we have a given

0:05:05 | embedding distortion power,

0:05:07 | then we have to minimize the centroid density for that embedding distortion power.

0:05:13 | And we have also that the total codebook can be determined

0:05:16 | by knowing any of its codewords: so if we know one codeword

0:05:20 | for embedding a zero, then we know the whole codebook for embedding a zero and also the whole codebook

0:05:24 | for embedding

0:05:25 | a one.

0:05:27 | From all these constraints,

0:05:28 | the embedding equation that we can derive

0:05:30 | is this one.

0:05:31 | So this

0:05:33 | embedding equation resembles that of dither modulation: we have here

0:05:38 | the dither coefficient

0:05:39 | and also the embedded bit,

0:05:41 | but

0:05:42 | it is applied in the logarithmic domain, so it is a logarithmic dither modulation;

0:05:46 | and we also have this constant C here that makes it

0:05:49 | a kind of generalized logarithmic DM. We will see in the following slides what C means and what

0:05:55 | its function is.

0:05:56 | This is

0:05:57 | the block diagram of the embedder, in which we take the input signal, we get rid of

0:06:02 | the sign,

0:06:04 | we go to the logarithmic domain, we add this constant C, and we apply normal DM

0:06:08 | with a dither sequence d and the input embedding bit b;

0:06:12 | then we get back to the natural domain and recover the sign of the input signal.
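
The block diagram just described can be sketched as follows. This is a reconstruction under my own sign and dither conventions (the names `delta`, `c`, `dither` and the exact quantizer form are assumptions, not code from the paper):

```python
import math

def log_dm_embed(x, bit, delta, c, dither=0.0):
    """Sketch of the generalized logarithmic DM embedder: drop the sign,
    go to the log domain, add the constant c, apply plain dithered DM
    with step delta and a bit-dependent dither, then undo the log and
    restore the sign. Conventions here are assumptions."""
    sign = 1.0 if x >= 0 else -1.0
    z = math.log(abs(x)) + c                 # log domain plus the constant c
    d = dither + bit * delta / 2.0           # dither selected by the bit
    q = delta * round((z - d) / delta) + d   # uniform dithered quantizer
    return sign * math.exp(q - c)            # back to the natural domain

# Embedding is idempotent: re-embedding the same bit leaves y unchanged,
# and the sign of the host coefficient is preserved.
y = log_dm_embed(-3.7, 1, 0.4, 0.1)
assert abs(log_dm_embed(y, 1, 0.4, 0.1) - y) < 1e-9
assert y < 0
```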

0:06:17 | In this scheme the parameter C

0:06:19 | defines the shape

0:06:21 | of the quantization region;

0:06:23 | in this case it is a scalar that sets

0:06:24 | the boundaries of the quantization region,

0:06:27 | and C is bounded by zero and Delta,

0:06:29 | the quantization step.

0:06:31 | We define the quantization step Delta in the logarithmic domain, so its

0:06:36 | equivalent

0:06:37 | in the natural domain becomes gamma,

0:06:39 | which is the exponential of Delta.

0:06:42 | C is defined as the

0:06:44 | logarithm of one plus beta_2, where beta_2 is the bound we had before

0:06:48 | for the

0:06:49 | magnitude of the watermark coefficient,

0:06:53 | and this C is what makes this a generalized logarithmic DM.

0:06:58 | We will see that different choices of C give different choices of the boundaries of the quantization regions.

0:07:06 | Since we have defined C here

0:07:09 | as a function of beta_2, the choice of beta_2 determines the choice of C.

0:07:12 | So if we take beta_2 equal to gamma minus one

0:07:16 | divided by gamma plus one,

0:07:18 | then the quantization boundaries are at the middle of the centroids, i.e. at the arithmetic mean of two consecutive centroids,

0:07:23 | and this is the same codebook as for multiplicative DM.

0:07:26 | If we choose beta_2 as the square root of gamma, minus one, then we have

0:07:30 | the centroid at the geometric mean of the quantization interval, and this is equivalent to using logarithmic

0:07:35 | DM.

0:07:36 | And another choice is beta_2 equal to gamma minus one divided by two,

0:07:40 | in which case we have the centroid at the arithmetic mean of the quantization interval.

0:07:44 | All three choices have something in common: if we take the first-order Taylor approximation of each setting of beta_2

0:07:52 | as a function of gamma,

0:07:53 | then all three have the same first-order Taylor approximation.

0:07:57 | That means that if we are in a low-distortion regime, where Delta approaches zero and therefore gamma approaches

0:08:02 | one,

0:08:03 | then all of them are asymptotically equivalent.
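
The asymptotic equivalence claimed above is easy to check numerically. A sketch, with the three beta_2 expressions transcribed as I understood them from the talk:

```python
# Three choices of beta_2 as a function of gamma = exp(Delta), as
# discussed above (my transcription of the talk's expressions):
#   multiplicative DM : boundaries midway between consecutive centroids
#   logarithmic DM    : centroid at the geometric mean of the interval
#   third choice      : centroid at the arithmetic mean of the interval
def beta2_multiplicative(g):
    return (g - 1.0) / (g + 1.0)

def beta2_logarithmic(g):
    return g ** 0.5 - 1.0

def beta2_arithmetic(g):
    return (g - 1.0) / 2.0

# Low-distortion regime: Delta -> 0, so gamma -> 1, and all three
# choices share the first-order Taylor expansion (gamma - 1) / 2.
g = 1.001
first_order = (g - 1.0) / 2.0
for choice in (beta2_multiplicative, beta2_logarithmic, beta2_arithmetic):
    assert abs(choice(g) - first_order) < 1e-6
```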

0:08:07 | To see it graphically: if the

0:08:09 | yellow

0:08:10 | bars

0:08:11 | represent the centroids,

0:08:12 | then for the first choice we would have

0:08:14 | the quantization boundaries located at the middle of the

0:08:18 | centroids, so equidistant from two consecutive centroids.

0:08:21 | For the second choice we get the centroid located at the geometric mean

0:08:25 | of the two boundaries,

0:08:28 | and with the third choice we have the centroid located at the arithmetic mean

0:08:32 | of the two boundaries.

0:08:34 | So this choice of C

0:08:35 | can be taken at the embedder

0:08:39 | to define the quantization, i.e. embedding, region boundaries,

0:08:43 | and at the decoder

0:08:44 | for defining

0:08:48 | the decoding, i.e. decision, region boundaries.

0:08:51 | So the choice of C at the embedder and the choice of C at the decoder, which we will call C prime,

0:08:56 | do not have to be the same.

0:09:00 | As we will see, the choice of C at the embedder will be

0:09:03 | driven by the minimization of the embedding distortion power,

0:09:07 | and the choice of C prime at the decoder will be driven

0:09:10 | by the minimization of the probability of decoding

0:09:13 | error.

0:09:16 | So here we have the formula for the embedding distortion power as a function of the

0:09:20 | host

0:09:21 | distribution.

0:09:23 | If we take the assumption of a low-distortion regime,

0:09:26 | then this equation becomes independent of the host distribution and we get this approximation.

0:09:32 | This formula is

0:09:33 | symmetric with respect to C equal to Delta divided by two,

0:09:38 | and Delta divided by two happens to be the minimum of this

0:09:42 | embedding distortion;

0:09:43 | the function then grows

0:09:46 | towards the boundaries of the domain of C,

0:09:49 | reaching its maxima at C tending to zero and C tending to Delta.

0:09:54 | For what can be called the high-distortion regime, that is, when Delta is fairly big,

0:09:59 | we have this other approximation. It is not a realistic approximation, of course, because we will never be

0:10:03 | in a high-distortion regime,

0:10:05 | but

0:10:06 | it serves the purpose of checking how much we can diverge from the low-distortion

0:10:12 | regime approximation

0:10:13 | when this assumption is not really true.

0:10:17 | So if we plot these equations,

0:10:21 | we get this representation:

0:10:22 | the solid lines

0:10:24 | represent

0:10:25 | the experimental results,

0:10:28 | and the dashed lines represent

0:10:30 | the approximation for the low-distortion regime; here, on this side of the plot,

0:10:35 | we are in the low-distortion regime,

0:10:36 | and we see that the approximation is really good.

0:10:38 | The dotted lines represent the approximation for the high-distortion regime,

0:10:43 | which is what the experimental results tend to when we are on this

0:10:47 | side of the plot.

0:10:48 | Here we represent the document-to-watermark ratio, so it is the inverse of the embedding distortion,

0:10:54 | and we can see that

0:10:56 | if we choose

0:10:58 | points for C that are symmetric with respect to Delta divided by two,

0:11:03 | then we get exactly the same approximation, due to the symmetry of the formula that we saw on

0:11:07 | the previous slide;

0:11:08 | and we get the maximum of the document-to-watermark ratio for the choice of C equal

0:11:14 | to Delta divided by two, as predicted.

0:11:18 | If we go to the probability of decoding error,

0:11:21 | and if we take a minimum-distance decoder, which of course is not the

0:11:25 | optimal decoder, but it serves the purpose of having

0:11:28 | an analytic, closed-form expression for this probability of error,

0:11:33 | then,

0:11:33 | in the low-distortion regime,

0:11:35 | we can come up with this approximation, which depends on the choice of C at the embedder and the choice

0:11:40 | of C prime at the decoder.

0:11:42 | Here we see that

0:11:43 | this formula is minimized

0:11:45 | when C approaches Delta

0:11:47 | and when C prime approaches Delta divided by four.
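
A minimum-distance decoder of the kind mentioned above can be sketched as follows. The lattice and offset conventions match my own reconstruction of the embedder, and the names (`c_prime`, `dither`) are mine, not the paper's:

```python
import math

def log_dm_decode(y, delta, c_prime, dither=0.0):
    """Minimum-distance decoding sketch: map the received coefficient to
    the log domain with the decoder's own offset c_prime, then return
    the bit whose codeword lattice lies closest."""
    z = math.log(abs(y)) + c_prime
    best_bit, best_dist = 0, float("inf")
    for bit in (0, 1):
        d = dither + bit * delta / 2.0
        q = delta * round((z - d) / delta) + d  # nearest codeword for bit
        if abs(z - q) < best_dist:
            best_bit, best_dist = bit, abs(z - q)
    return best_bit

# With delta = 0.4, c_prime = 0.1 and zero dither, a value sitting on a
# bit-1 codeword (log-domain position 1.0) decodes to 1, and stays 1
# after a small perturbation; a bit-0 codeword (position 0.8) gives 0.
assert log_dm_decode(math.exp(1.0 - 0.1), 0.4, 0.1) == 1
assert log_dm_decode(math.exp(0.95 - 0.1), 0.4, 0.1) == 1
assert log_dm_decode(math.exp(0.8 - 0.1), 0.4, 0.1) == 0
```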

0:11:50 | So we can see here that we have a trade-off in C:

0:11:53 | the choice of C at the embedder for minimizing the embedding power is not the same as the

0:11:58 | choice of C

0:11:59 | given by minimizing the decoding error;

0:12:03 | one is Delta divided by two and the other is Delta.

0:12:06 | In any case,

0:12:08 | due to the symmetry of the embedding distortion formula,

0:12:12 | if we are in the low-distortion regime, the optimum of C will be in the second half of

0:12:16 | the interval, so

0:12:17 | in between Delta divided by two and Delta.

0:12:20 | If this is not true, then

0:12:22 | the symmetry no longer holds and C can be chosen at any point between zero and Delta.

0:12:27 | It is worth noticing in this formula that it is ill-defined for C

0:12:33 | prime tending to zero or to Delta divided by two, the points at which the sine term

0:12:39 | becomes null,

0:12:40 | so around those points the approximation will be worse.

0:12:43 | So if we plot the formula,

0:12:45 | then we get

0:12:49 | here the

0:12:50 | continuous lines, the theoretical approximation from the previous slide,

0:12:54 | and the dots,

0:12:55 | which represent the experimental results.

0:12:58 | We see that for C prime

0:13:00 | equal to Delta divided by four we get the minimum of

0:13:03 | the probability of decoding error;

0:13:05 | for values of C prime symmetric

0:13:08 | with respect to Delta divided by four we get exactly the same

0:13:12 | approximation, by the previous formula.

0:13:15 | We see

0:13:16 | here that the approximation is not

0:13:19 | very good at this point, due to the ill-definition of the formula.

0:13:23 | And anyway, we see again that, for the choices of C,

0:13:27 | as we increase C and we approach Delta,

0:13:30 | we get a lower probability of decoding error,

0:13:33 | as expected.

0:13:35 | So, checking the robustness of this method against

0:13:39 | different kinds of attacks,

0:13:41 | and comparing it to other side-informed data hiding methods:

0:13:45 | if we choose a JPEG attack,

0:13:48 | we can see,

0:13:51 | if we plot the quality factor used for the JPEG attack

0:13:54 | against the bit error rate that we get under that attack,

0:13:58 | that when the attack is mild, so we have a very high quality factor,

0:14:02 | logarithmic DM performs

0:14:03 | a bit worse

0:14:04 | than normal DM.

0:14:06 | And that is because

0:14:07 | here the probability of error of logarithmic DM is dominated by the

0:14:11 | small-magnitude coefficients,

0:14:12 | whose centroids are quantization points very close to each other.

0:14:18 | But for the rest

0:14:19 | of the plot,

0:14:21 | in this area,

0:14:24 | for low quality factors, so when the JPEG attack is strong,

0:14:28 | logarithmic DM performs much better than

0:14:31 | normal DM, and that is because the robustness of the centroids used for the high-magnitude

0:14:36 | coefficients is much

0:14:38 | better

0:14:40 | for logarithmic DM than for normal DM.

0:14:44 | Regarding the

0:14:46 | other type of attack, the AWGN attack,

0:14:49 | we get essentially

0:14:50 | the same results:

0:14:52 | when the noise is mild, logarithmic DM performs worse than DM; but

0:14:57 | when the attack is strong, so we have a low PSNR between the watermarked image and the

0:15:02 | watermarked-and-attacked image,

0:15:04 | then the performance of logarithmic DM

0:15:07 | is much better than that of

0:15:09 | normal DM.

0:15:11 | So, to conclude:

0:15:13 | we have seen in this work that Weber's law can be used to derive perceptual constraints

0:15:18 | for side-informed watermarking systems,

0:15:21 | and in this work a generalized version of logarithmic DM has been derived.

0:15:26 | For this generalized version we have studied the embedding distortion power and the probability

0:15:32 | of decoding error,

0:15:34 | and which parameters optimize these two quantities.

0:15:39 | And we have also seen that this proposed scheme outperforms DM when we consider severe

0:15:44 | attacks:

0:15:44 | for severe attacks, like JPEG and AWGN attacks,

0:15:47 | it outperforms

0:15:49 | normal DM.

0:15:51 | Thank you.

0:16:00 | [Chair] We have time for questions; a microphone is available.

0:16:04 | [Presenter] I am sure that the authors would answer them much better than me, but fine.

0:16:13 | [Audience, partially inaudible question]

0:16:25 | [Audience] Which... in this one?

0:16:29 | [Presenter] Ah, yes.

0:16:31 | [Audience] Is it with a fixed embedding distortion?

0:16:35 | [Presenter] Yeah, and the power of the attack is also fixed, right, yeah.

0:16:47 | [Presenter] Yes... well, I

0:16:48 | would have to check with them what they have exactly used,

0:16:52 | but of course some measure of distortion must be used in order to have a fair

0:16:57 | comparison.

0:17:07 | [Audience, inaudible]

0:17:15 | [Presenter] Yeah.

0:17:27 | [Presenter] Oh, yeah, for sure, but

0:17:30 | in this sense, using a normal, perceptually unaware

0:17:34 | measure of distortion,

0:17:35 | we get a lower bound on the

0:17:39 | difference

0:17:41 | that we get with respect to logarithmic DM; of course, if we used a perceptually aware distortion

0:17:45 | metric,

0:17:46 | then we would have better results.

0:17:49 | From here it is not...

0:17:52 | no, it is not clear; I would have to check, but I think that what they have used is not

0:17:59 | perceptually aware, yeah.

0:18:07 | [Presenter] Yes, you are right.

0:18:08 | [inaudible]

0:18:16 | [Audience] Could you go to this slide?

0:18:19 | ... Thank you.

0:18:20 | [Audience] So apparently you chose only the largest coefficients for embedding.

0:18:24 | In my viewpoint this is

0:18:26 | potentially introducing synchronization problems.

0:18:30 | And it seems to me that,

0:18:33 | since the error rate is no longer going to minus infinity

0:18:38 | for high quality, you no longer guarantee

0:18:42 | the efficiency of the scheme:

0:18:44 | you do not have one-hundred-percent embedding efficiency.

0:18:47 | Have you looked at

0:18:49 | how to know which coefficients were modified at embedding time?

0:18:53 | Instead of

0:18:54 | selecting the largest coefficients, at detection would you use exactly the same criterion, or would you have a

0:19:00 | different criterion?

0:19:01 | I mean, why this choice of the coefficients?

0:19:04 | [Presenter] Well, what I can answer is that this choice

0:19:06 | introduces a problem of synchronization, I know.

0:19:09 | [Audience] So

0:19:10 | this scheme is kind of

0:19:12 | merging the two problems into one here, right?

0:19:15 | There is a new modulation scheme, and it induces

0:19:17 | this synchronization problem.

0:19:19 | I would try to separate the two aspects. [Presenter] No, here synchronization is not considered at all;

0:19:25 | we have left it completely apart,

0:19:28 | the synchronization problem, and of course it would be a problem if this

0:19:32 | choice were truly followed.

0:19:34 | If we embedded in every coefficient, then

0:19:36 | we would not have those constraints.

0:19:41 | [Audience] So, thank you

0:19:42 | very much.

0:19:43 | [Presenter] Thank you.