Presentation Transcript


  1. What we expect from Watermarking Scott Craver and Bede Liu Department of Electrical Engineering, Princeton University

  2. “We” who?

  3. “We” who? “What does breaking the code have to do with research? Research for what? Are you researching cloning, or the laws of physics? We’re not dealing with Boyle’s Law here….” --Jack Valenti, MPAA

  4. Security research
  • “Make and Break” research cycle
  • “Break” is a terrible misnomer.
  • Better systems derived from earlier mistakes
  • In practice, just about everything invented is broken the day it is designed.
  • Academic papers on watermarking schemes vastly outnumber academic papers identifying flaws in watermarking schemes.

  5. Security Design Principles
  • Have a well-defined goal/threat model
  • Be wary of Kerckhoffs’ Criterion
  • Utilize community resources
  • Economy of final design

  6. Threat model and goals
  • Modeling the adversary

  7. The Kerckhoffs Criterion
  • Assume an attacker will know the inner workings of your system: algorithm, source code, etc.
  • A design that violates this principle is often called an obscurity tactic, i.e. “security through obscurity.”
  • But why do we have to be so pessimistic?
  • Obscurity does not scale.
  • Obscurity tactics are not amenable to quantitative analysis.
  • In practice, the worst case is pretty common.

  8. Modeling an adversary
  • The “common case” is hard to define in an adversarial situation.
  • A technology that fails 1% of the time can be made to fail 100% of the time.
  • Threat one: distribution of exploit code or circuit plans for circumventing controls.
  • Threat two: widespread distribution of one clip after one successful circumvention.

  9. Community Resources
  • If there’s a vulnerability in your system, it has probably already been published in a journal somewhere.
  • The body of known attacks, flaws and vulnerabilities is large, and the result of years of community effort.
  • So is the body of successful cryptosystems.
  • But, does it really help to publish the specifications of your own system?

  10. Economy of Solution
  • A solution’s complexity should match the system’s design goals.
  • Problem: easing or removing goals that a finished product does not meet.
  • Nobody needs a solid gold speed bump with laser detectors and guard dogs.

  11. General Observations
  • People often overestimate security.
  • Flaws in many published security systems are often either well-known and completely preventable, or completely unavoidable for a given application.
  • Common mistakes, exploited by attackers:
    • Mismodeling the attacker
    • Mismodeling the human visual system
    • Ignoring application/implementation issues supposedly beyond the scope of the problem

  12. Part II: video steganalysis

  13. Attacks on video watermarking
  • Problem: attacker digitizes video from analog source, distributes over Internet.
  • Solution model one: control digitizing.
  • Solution model two: control playback of digitized video in display devices.
  • Is there a “common case” in either model?

  14. Known attacks
  • Ingemar Cox and Jean-Paul Linnartz, IEEE JSAC, vol. 16, no. 4, pp. 587–593, 1998
  • Pre-scrambling operation prior to detection
  • Attack on type-1 solution
  [Diagram: S → A/D → S⁻¹ → Detect]

  15. Known attacks
  • Ingemar Cox and Jean-Paul Linnartz, IEEE JSAC, vol. 16, no. 4, pp. 587–593, 1998
  • In our threat model, this must be done in hardware. It must also commute with some pretty heavy processing.
  [Diagram: scrambling S and descrambling S⁻¹ placed around the A/D step, compression/decompression, and VHS recording/playback]

  16. Example
  • Scanline inversion
  • Subset of scanlines flipped in luminance
  • Strips of 8/16 lines for better commutativity with compression
  • Question: what watermarking schemes are immune to this attack?
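
  A minimal sketch of such a scanline-inversion scramble, assuming 8-bit luminance frames stored as numpy arrays; the function name, key layout, and default strip height are illustrative assumptions, not a specification of any deployed system:

      import numpy as np

      def scramble_luminance(frame, key, strip_height=8):
          """Invert the luminance of a keyed subset of scanline strips.

          frame: 2-D uint8 array (luminance plane), height divisible by strip_height.
          key:   boolean array with one entry per strip (True = invert that strip).
          The operation is its own inverse, so the same call descrambles.
          """
          out = frame.copy()
          for i, flip in enumerate(key):
              if flip:
                  rows = slice(i * strip_height, (i + 1) * strip_height)
                  out[rows] = 255 - out[rows]   # luminance inversion of this strip
          return out

      # Applying the scramble twice with the same key recovers the original frame.
      rng = np.random.default_rng(0)
      frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
      key = rng.random(480 // 8) < 0.5
      assert np.array_equal(scramble_luminance(scramble_luminance(frame, key), key), frame)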

  17. Vulnerabilities in Digital Domain
  • Direct removal/damaging of watermarks
  • Oracle attacks
  • Reverse-engineering of mark parameters
  • Collusion attacks
  • Mismatch attacks

  18. Direct Removal
  • Space of possible attacks is too large to model.
  • Direct removal attacks can be unguided, or based on information about the watermark embedding method.
  • Attacks often exploit flaws in our perceptual models.

  19. Oracle Attacks
  • Use the watermark detector itself to guide the removal process.
  • Easiest if attackers have their own watermark detector software.
  • Other attacks are probably more convenient. If you can force an attacker to resort to an oracle attack using your own hardware, you’re probably doing something right.
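
  A rough sketch of one attack in this family, in the sensitivity-analysis style used against linear-correlation detectors; the black-box interface detects(image) -> bool, the choice of delta, and the preliminary step of degrading the image until it sits near the decision boundary are all assumptions made for illustration:

      import numpy as np

      def estimate_mark_signs(boundary_img, detects, delta=4):
          """Probe a black-box detector, one pixel at a time.

          boundary_img: uint8 image already degraded to sit near the detector's
          decision boundary. Nudging a single pixel by +delta and asking the
          oracle whether detection still succeeds leaks the sign of the
          watermark at that pixel (for a correlation-style detector).
          """
          signs = np.zeros(boundary_img.shape, dtype=np.int8)
          for idx in np.ndindex(boundary_img.shape):
              probe = boundary_img.astype(np.int16)   # fresh copy per probe
              probe[idx] += delta
              signs[idx] = 1 if detects(probe.clip(0, 255).astype(np.uint8)) else -1
          return signs

      # The attacker then subtracts a scaled copy of the estimated sign pattern
      # from the marked image; the cost is roughly one oracle query per pixel.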

  20. Mismatch attacks
  • No need to remove the watermark, just render it unnoticeable by automated detectors.
  • Two kinds: subtle warping, and blatant scrambling (which must be inverted later).
  • Watermarks for automated detection need to be super-robust: not only must the information survive, but it must remain detectable by a known algorithm.

  21. Estimation of mark parameters
  • Robust watermarking leaves fingerprints. Super-robust marks leave big fingerprints.
  • Auto-collusion attacks take advantage of temporal redundancy [Boeuf & Stern ’01]
  • Histogram analysis [Maes ’98] [Fridrich et al. ’02] [Westfeld & Pfitzmann ’00]
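
  A sketch of the auto-collusion idea, under the assumption that the same noise-like additive pattern is embedded in every frame; the function name, the Gaussian high-pass step, and sigma are illustrative choices, not the method of any cited paper:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def estimate_repeated_mark(frames, sigma=2.0):
          """Average high-pass residuals over many frames.

          frames: iterable of 2-D float arrays (luminance planes of equal size).
          Each residual is a noisy per-frame estimate of a noise-like mark;
          frame-dependent texture averages away, while a mark repeated in every
          frame reinforces.
          """
          acc, n = None, 0
          for f in frames:
              residual = f - gaussian_filter(f, sigma)   # crude high-pass filter
              acc = residual if acc is None else acc + residual
              n += 1
          return acc / n

      # The attacker would then subtract (or remodulate) a scaled copy of this
      # estimate from each frame to weaken the embedded mark.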

  22. Histogram Analysis
  • An additive signature leaves statistical “fingerprints” in the sample histogram.
  • Additive watermark ⇒ convolution of histograms:
    y(t) = x(t) + w(t)
    h_y(x) = h_x(x) * h_w(x)
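
  A quick numerical check of the convolution relationship above, assuming the host samples and the additive mark are independent and integer-valued (toy data rather than a real image; the pmf helper is our own):

      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.integers(0, 256, size=1_000_000)                       # toy host samples
      w = rng.choice([-1, 0, 1], size=x.size, p=[0.25, 0.5, 0.25])   # additive mark
      y = x + w

      def pmf(samples, lo, hi):
          counts = np.bincount(samples - lo, minlength=hi - lo + 1)
          return counts / counts.sum()

      h_x = pmf(x, 0, 255)
      h_w = pmf(w, -1, 1)
      h_y = pmf(y, -1, 256)

      # The histogram of y is (approximately) the convolution of the two histograms.
      print(np.max(np.abs(h_y - np.convolve(h_x, h_w))))   # prints a small number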

  23. Histogram Analysis
  • Transform the statistical effect of the watermark into an additive signal:
    h_y(x) = h_x(x) * h_w(x)
    g_y(ω) = log( FFT[ h_y(x) ] )
    g_y(ω) = g_x(ω) + g_w(ω)
  • Can detect by correlation [Harmsen & Pearlman, Proc. SPIE 2003]
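
  A sketch of that transform, continuing the toy h_x, h_w, h_y from the previous snippet. The magnitude is taken inside the log to sidestep complex-log branch issues, and the 512-point FFT length and correlation statistic are our own illustrative choices, a loose reading of the approach rather than a reproduction of the cited paper:

      import numpy as np

      def log_spectrum(h, n=512):
          # Zero-padded FFT of a histogram, then log of the magnitude;
          # the small epsilon guards against log(0).
          return np.log(np.abs(np.fft.fft(h, n)) + 1e-12)

      g_x = log_spectrum(h_x)
      g_w = log_spectrum(h_w)
      g_y = log_spectrum(h_y)

      # In this domain the mark is (approximately) additive: g_y ≈ g_x + g_w,
      # so correlating the residual against g_w gives a detection statistic
      # whenever a reference for the unmarked histogram is available.
      residual = g_y - g_x
      score = np.dot(residual - residual.mean(), g_w - g_w.mean()) / len(g_w)
      print(score)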

  24. Histogram Analysis
  • Example: additive spread-spectrum mark
    y(t) = x(t) + a·s(t),  s(t) ∈ { -1, 0, +1 }
  • DFT of the mark’s histogram, with p the probability that s(t) ≠ 0:
    G(ω) = (1 − p) + p·cos(2πωa/N)
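
  A numerical check of that closed form, using the toy mark from the earlier snippet (a = 1, p = 0.5, i.e. P(s = ±1) = 0.25 each; the length N = 512 is an arbitrary choice):

      import numpy as np

      N = 512
      a, p = 1, 0.5
      h_w_full = np.zeros(N)
      h_w_full[0] = 1 - p        # mass at 0
      h_w_full[a] = p / 2        # mass at +a
      h_w_full[-a] = p / 2       # mass at -a (wrapped index for the N-point DFT)

      G_measured = np.fft.fft(h_w_full).real
      omega = np.arange(N)
      G_formula = (1 - p) + p * np.cos(2 * np.pi * omega * a / N)
      print(np.max(np.abs(G_measured - G_formula)))   # ~0 up to floating point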

  25. Conclusions
  • For this watermarking application, the state of the art favors analysis.
  • Most new systems we examine possess elementary flaws.
  • The scientific community is here to help.
