
Cocktail Watermarking for Digital Image Protection


Presentation Transcript


  1. Cocktail Watermarking for Digital Image Protection, IEEE Transactions on Multimedia. C. S. Lu, S. K. Huang, C. J. Sze, and Mark Liao, Institute of Information Science, Academia Sinica, Taiwan

  2. Motivation • What kinds of things will a thief not steal? • Things that are too difficult to steal? • Things where the thief would get hurt in the act? • Things that are destroyed once taken out of their original place?

  3. Cox’s Method • Add watermark • Step 1: FDCT (Forward Discrete Cosine Transform) and select the n largest coefficients in magnitude. • Step 2: Generate n N(0, 1) values. • Step 3: Embed each selected coefficient as v'_i = v_i(1 + α·x_i). • Step 4: IDCT (Inverse Discrete Cosine Transform) • Result
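
A minimal sketch of the four embedding steps above, not the authors' code: it assumes a grayscale NumPy image and uses scipy.fft for the 2-D DCT; cox_embed, alpha, n, and seed are illustrative names and values, and v' = v(1 + α·x) is the Cox et al. spread-spectrum rule.

```python
import numpy as np
from scipy.fft import dctn, idctn

def cox_embed(image, n=1000, alpha=0.1, seed=0):
    """Embed an N(0,1) watermark into the n largest-magnitude DCT coefficients."""
    coeffs = dctn(image.astype(float), norm='ortho')           # Step 1: forward DCT
    flat = coeffs.ravel()
    idx = np.argsort(np.abs(flat))[::-1][:n]                   # n largest in magnitude
    x = np.random.default_rng(seed).standard_normal(n)         # Step 2: n N(0,1) values
    flat[idx] = flat[idx] * (1.0 + alpha * x)                  # Step 3: v' = v * (1 + alpha * x)
    marked = idctn(flat.reshape(coeffs.shape), norm='ortho')   # Step 4: inverse DCT
    return marked, x, idx
```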

  4. Cox’s Method (conti.) • Extract watermark • Step 1: FDCT of the original image and the image in question. • Step 2: Select the largest coefficients in magnitude from both images. • Step 3: Invert the embedding process to obtain the extracted watermark X*. • Step 4: Similarity sim(X, X*) = X·X* / √(X*·X*) • Result
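
A matching sketch of the extraction and similarity steps, under the same assumptions as the embedding snippet; for simplicity the coefficient positions idx from embedding are reused rather than re-selected from both images, and sim(X, X*) = X·X*/√(X*·X*) is Cox's detector response.

```python
import numpy as np
from scipy.fft import dctn

def cox_extract(original, suspect, idx, alpha=0.1):
    """Steps 1-3: DCT both images, reuse the embedding positions idx, invert v' = v(1 + alpha*x)."""
    v  = dctn(original.astype(float), norm='ortho').ravel()[idx]
    vq = dctn(suspect.astype(float), norm='ortho').ravel()[idx]
    return (vq - v) / (alpha * v)                               # estimated watermark X*

def similarity(x, x_est):
    """Step 4: sim(X, X*) = X . X* / sqrt(X* . X*)."""
    return float(np.dot(x, x_est) / np.sqrt(np.dot(x_est, x_est)))
```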

  5. Host and Watermarked Image by Cox et al. (conti.) • Example 1: n = 2232 • Host image vs. watermarked image, PSNR = 34.87 dB

  6. Host and Watermarked Image by Cox et al. (conti.) • Example 2: n = 2232 • Host image vs. watermarked image, PSNR = 36.21 dB

  7. Result by Cox et al. (conti.) • Attacks:

  8. Result by Cox et al. (conti.) • Detector response w.r.t. different attacks: • Lena: maximum (n = 2232): 45.5

  9. Result by Cox et al. (conti.) • Detector response w.r.t. different attacks: • Kids: maximum (n = 2232): 45.4

  10. Conventional Methods (I) • NEC approach • The 1000 largest AC coefficients of the global DCT of a 256×256 image, e.g. (+1250, -1006, -989, …, +30)

  11. Conventional Methods (II) • Random modulation: Modu(+,+), Modu(-,-), Modu(+,-), Modu(-,+) • magnitude / Gaussian(0,1) / type: +1250 / +0.5 / (+,+); -1006 / +0.2 / (-,+); -989 / -0.3 / (-,-); +885 / -0.2 / (+,-); +30 / -0.1 / (+,-)
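
A tiny illustrative snippet (an assumed helper, not from the paper) that labels each coefficient/watermark pair by its sign combination, reproducing the table above.

```python
def modulation_type(coeff, g):
    """Return the sign pair, e.g. ('+', '-') for Modu(+,-)."""
    sign = lambda v: '+' if v >= 0 else '-'
    return (sign(coeff), sign(g))

pairs = [(+1250, +0.5), (-1006, +0.2), (-989, -0.3), (+885, -0.2), (+30, -0.1)]
print([modulation_type(c, g) for c, g in pairs])
# [('+', '+'), ('-', '+'), ('-', '-'), ('+', '-'), ('+', '-')]
```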

  12. Effects of Attacks • What attacks typically do: they change the magnitudes of the coefficients • Is robustness just a matter of luck? • [Figure: original vs. watermarked coefficients (coeff. 1 … coeff. 1000) under two attacks; Attack #1 leaves 800 pairs matched, detector response 0.8, while Attack #2 leaves 150 pairs matched, detector response 0.2]

  13. Classification of Attacks • Attacks that increase the magnitude of most transform coefficients • Sharpening, histogram equalization, edge enhancement, … • Attacks that decrease the magnitude of most transform coefficients • Blurring, compression, …
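
As a rough illustration of this split (an assumption-laden sketch on a synthetic image, not an experiment from the paper), one can count how many DCT magnitudes grow after a blur versus after a sharpening filter; gaussian_filter comes from scipy.ndimage and the unsharp-mask sharpening is an illustrative choice.

```python
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import gaussian_filter

def fraction_increased(image, attacked):
    """Fraction of DCT coefficients whose magnitude grew after the attack."""
    a = np.abs(dctn(image, norm='ortho'))
    b = np.abs(dctn(attacked, norm='ortho'))
    return float(np.mean(b > a))

img = np.random.default_rng(1).uniform(0.0, 255.0, (128, 128))   # synthetic test image
blurred   = gaussian_filter(img, sigma=2)                        # most magnitudes shrink
sharpened = img + 1.5 * (img - gaussian_filter(img, sigma=2))    # most magnitudes grow
print(fraction_increased(img, blurred), fraction_increased(img, sharpened))
```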

  14. Conventional Methods (IV) • Drawback • Anyone can hide a watermark in multimedia data, but no one has a theoretical basis for proving its robustness

  15. Cocktail Watermarking Technique (I) • Origin: late January 1999 (2000.10 digibits) • Idea: can more than one watermark be embedded so that the watermarks complement each other and together withstand attacks with very different behaviors? • Experiments completed: March 27, 1999 • Approach: positive modulation (increases magnitude) adds + on a + coefficient and − on a − coefficient; negative modulation (decreases magnitude) adds − on a + coefficient and + on a − coefficient
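
A minimal sketch of the two modulation rules just described, assuming coefficients are perturbed within a JND bound; positive_modulation, negative_modulation, w, and jnd are illustrative names, and the paper's exact update rule differs in detail.

```python
import numpy as np

def positive_modulation(coeff, w, jnd):
    """Increase magnitude: the perturbation has the same sign as the coefficient."""
    return coeff + np.sign(coeff) * np.abs(w) * jnd    # coeff == 0 is left unchanged here

def negative_modulation(coeff, w, jnd):
    """Decrease magnitude: the perturbation has the opposite sign to the coefficient."""
    return coeff - np.sign(coeff) * np.abs(w) * jnd
```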

  16. Cocktail Watermarking Technique (IV) • W(i) = bipolar(Tm(x,y) − T(x,y)) • We(i) = bipolar(Ta(x,y) − T(x,y)) = bipolar((Ta(x,y) − Tm(x,y)) + (Tm(x,y) − T(x,y))) = bipolar(β1 + β2) • To obtain a higher detector response, We(i) and W(i) should have the same sign, which holds when • β1 and β2 have the same sign, or • the influence of β1 is smaller than the influence of β2 • Complementary modulation, JND
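
A small sketch of the quantities on this slide, taking bipolar(·) to be the sign quantizer (an assumption) and letting T, Tm and Ta stand for the original, watermarked (modulated) and attacked coefficient values.

```python
import numpy as np

bipolar = lambda v: np.where(v >= 0, 1, -1)   # sign quantizer assumed for bipolar(.)

def embedded_watermark(T, Tm):
    """W(i) = bipolar(Tm - T): sign of the modulation introduced at embedding."""
    return bipolar(Tm - T)

def extracted_watermark(T, Ta):
    """We(i) = bipolar(Ta - T) = bipolar(beta1 + beta2), beta1 = Ta - Tm, beta2 = Tm - T."""
    return bipolar(Ta - T)
```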

  17. Cocktail Watermarking Technique (IV)

  18. Complementary Modulation • The proposed scheme embeds two watermarks, each playing a complementary role in resisting different kinds of attacks. • The values of the two watermarks are drawn from the same watermark sequence; however, they are embedded using different modulation rules: • Positive modulation • Negative modulation

  19. Cocktail Watermarking Technique (II) • Example: positive modulation under an increase-magnitude attack • coefficient / watermark value: +1250 / +0.5; -1006 / -0.3; -989 / -0.3; +885 / +0.2; +30 / +0.1 • 830 matched, 170 not matched → detector response 0.66

  20. Cocktail Watermarking Technique (III) • Example: negative modulation under a decrease-magnitude attack • coefficient / watermark value: +1250 / -0.3; -1006 / +0.1; -989 / +0.2; +885 / -0.3; +30 / -0.1 • 220 matched, 780 not matched → detector response -0.56 (see the sketch below)
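
The responses quoted in the two examples follow from counting matched and mismatched sign pairs; a tiny sketch, assuming detector response = (matches − mismatches) / total:

```python
def detector_response(matched, total=1000):
    """Detector response as (matches - mismatches) / total sign pairs."""
    return (matched - (total - matched)) / total

print(detector_response(830))   #  0.66, the positive-modulation example
print(detector_response(220))   # -0.56, the negative-modulation example
```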

  21. Cocktail Watermarking Technique (V) • The most damaging attack: a 50%–50% attack • Which 1000 coefficients are used depends on the image • Ill-posed case: the attack happens to increase the magnitudes of 50% of the coefficients and decrease the other 50% • Worst case: the lowest detector response

  22. Watermark Encoding

  23. Watermark Decoding

  24. Experimental Results • Results after 32 different attacks • Verification of the complementary effect • A recognizable pattern used as an example • Notebook demonstration • Detector response vs. progressively degraded image quality • Detector response vs. increasing compression ratio • Combined attacks • Probabilities of false positive and false negative

  25. Categories of Attacks (I) • Waveform attacks – impair the embedded watermark by manipulating the whole watermarked media • Linear filtering, non-linear filtering, waveform-based compression (JPEG, EZW), addition of noise, etc. • Detection-disabling attacks – break the correlation and make recovery of the watermark impossible • Shear, pixel permutation, sub-sampling, and other geometric distortions (StirMark, unZign)

  26. Waveform attacks (I) • Blurring (27.79/35.64) • high-frequency components are deleted

  27. Waveform attacks (II) • Sharpening (24.56/35.64) • high-frequency components are enhanced

  28. Waveform attacks (III) • JPEG compression (19.38/35.64) • quality factor 5% • severe blocky effects

  29. Waveform attacks (IV) • Embedded zerotree wavelet compression (23.66/35.64) • compression ratio 64:1 (SPIHT) • all wavelet coefficients are reduced

  30. Detection-disabling attacks (I) • Jitter (13.36/35.64) • 4 pairs of columns deleted and duplicated • introduces loss of synchronization

  31. Detection-disabling attacks (II) • StirMark (17.73/35.64) • all default parameters • non-linear operations • introduces loss of synchronization

  32. StirMark Attack • Digimarc, SysCoP, JK_PGS, EikonaMark, Signum, etc. are all successfully defeated

  33. Benchmark Tool: StirMark • Applies minor geometric distortions • Stretching, shearing, shifting, and rotation • Simulates the printing/scanning process • Uses a ‘sinc’ reconstruction function
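
A rough sketch of such a minor geometric distortion (not the actual StirMark code; bilinear resampling stands in for the ‘sinc’ reconstruction, and all ranges are assumed), using scipy.ndimage.affine_transform:

```python
import numpy as np
from scipy.ndimage import affine_transform

def minor_geometric_distortion(img, rng=None):
    """Small random stretch + shear + rotation + sub-pixel shift (bilinear resampling)."""
    rng = np.random.default_rng() if rng is None else rng
    angle = np.deg2rad(rng.uniform(-1.0, 1.0))        # tiny rotation
    sx, sy = rng.uniform(0.99, 1.01, size=2)          # tiny stretch
    shear = rng.uniform(-0.01, 0.01)                  # tiny shear
    shift = rng.uniform(-1.0, 1.0, size=2)            # sub-pixel shift
    A = np.array([[sx * np.cos(angle), -np.sin(angle) + shear],
                  [np.sin(angle),       sy * np.cos(angle)]])
    return affine_transform(img.astype(float), A, offset=shift, order=1)
```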

  34. Detection-disabling attacks (III) • Rotation (14.28/35.64) • registration problem

  35. Detection-disabling attacks (IV) • Shear (13.73/35.64) • significant distortion

  36. Categories of Attacks (II) • Interpretation attacks – create ambiguity by producing a fake host medium or a fake watermark (deadlock problem) • Removal attacks – analyze the watermarked data and discard only the watermark, e.g., collusion attacks and non-linear filtering operations

  37. Interpretation Attack (I) • [Figure: Alice vs. Bob – the original image and original watermark versus a faked image and faked watermark derived (+/−) from the watermarked image]

  38. Removal attacks (I) • Collusion attack (34.39/35.64) • four watermarked images, each hidden with a different watermark, are averaged (see the sketch below)
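
A one-function sketch of the averaging step (illustrative only):

```python
import numpy as np

def collusion_average(copies):
    """Average several copies of the same image, each carrying a different watermark."""
    return np.mean(np.stack([np.asarray(c, dtype=float) for c in copies]), axis=0)
```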

  39. Attacked Watermarked Images • [Figure: host image (128×128), watermarked image (34.5 dB), and attacked versions: blurring (15×15), median filtering (11×11), rescaling, sharpening, histogram equalization, dithering, JPEG (5%), EZW (64:1), StirMark, StirMark+Rot180, StirMark (5), jitter (5), flip, brightness/contrast, Gaussian noise, texturizer]

  40. Attacked Watermarked Images • [Figure: host image (128×128), watermarked image (34.5 dB), and attacked versions: difference clouds, diffuse, dust, extrude, facet, halftone, mosaic, motion blurring, patchwork, photocopy, pinch, ripple, shear, smart blurring, thresholding (96), twirl]

  41. Detection Result of Noise-style Watermark for the Tiger Image

  42. Result of Our Method (conti.) • [Figure: detector responses of the negative and positive watermarks under sharpening 75%, sharpening 85%, dithering, and StirMark]

  43. Result of Our Method (conti.) • [Figure: detector responses of the negative and positive watermarks under StirMark (5), oil painting, embossing, and despeckle]

  44. Result of Our Method (conti.) • [Figure: detector responses of the negative and positive watermarks under pixelization, equalization, and two rotations]

  45. Comparisons

  46. Single-type Attack (Gaussian blurring) with decreasing quality • Blurring masks of size 3×3, 15×15, 31×31 • The tiger image, 128×128 in size, is watermarked

  47. Single-type Attack (SPIHT) with increasing compression ratios • From 4:1 to 512:1 • The tiger image, 128×128 in size, is watermarked

  48. Combined Attack (Repeated Attack) • Attacks are composed of blurring (B) and histogram equalization (H): B, BH, BHB, BHBH (a sketch follows below)
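
A sketch of how such a repeated attack could be composed; the blur sigma and the simple histogram-equalization routine are assumptions, not the parameters used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur(img, sigma=1.5):
    """B: Gaussian blurring (sigma is an assumed parameter)."""
    return gaussian_filter(img, sigma=sigma)

def hist_eq(img):
    """H: simple global histogram equalization on an 8-bit grayscale image."""
    levels = np.clip(img, 0, 255).astype(np.uint8)
    hist, _ = np.histogram(levels, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    return cdf[levels] * 255.0

def combined_attack(img, sequence="BHBH"):
    """Apply one of the repeated attacks from the slide, e.g. 'B', 'BH', 'BHB', 'BHBH'."""
    for step in sequence:
        img = {'B': blur, 'H': hist_eq}[step](img)
    return img
```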

  49. Probability of False Negative / Probability of False Positive • [Plots: both probabilities versus the threshold T (about 0.15–0.2) for values of P1 between 0.5 and 0.7]
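
One common way to model such curves (an assumption here, not necessarily the paper's exact derivation) treats each bipolar match on an unwatermarked image as a fair coin flip, so the false-positive probability of exceeding a detector-response threshold T over n sign pairs is a binomial tail:

```python
from math import ceil, comb

def false_positive_prob(n_bits, threshold, p=0.5):
    """P(detector response >= threshold) for an unwatermarked image, modelling each of the
    n_bits bipolar values as matching independently with probability p (binomial tail)."""
    # response r = (matches - mismatches) / n_bits, so r >= T needs matches k >= n_bits*(1+T)/2
    k_min = ceil(n_bits * (1 + threshold) / 2)
    return sum(comb(n_bits, k) * p**k * (1 - p)**(n_bits - k)
               for k in range(k_min, n_bits + 1))

print(false_positive_prob(1000, 0.2))   # e.g. threshold T = 0.2 over 1000 sign pairs
```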

  50. Commercial Applications • Tamper proofing • Cameras • Audio recorders • Video recorders • Surveillance monitors • Cocktail watermarking with sensors
