  1. Shannon’s theory part II • Ref. Cryptography: Theory and Practice, Douglas R. Stinson

  2. Shannon’s theory • 1949, “Communication Theory of Secrecy Systems” in the Bell System Technical Journal. • Two issues: • What is the concept of perfect secrecy? Does any cryptosystem provide perfect secrecy? • It is achievable when a key is used for only one encryption • How do we evaluate a cryptosystem when many plaintexts are encrypted using the same key?

  3. Perfect secrecy • Definition: A cryptosystem has perfect secrecy if Pr[x|y] = Pr[x] for all x ∈ P, y ∈ C • Idea: Oscar can obtain no information about the plaintext by observing the ciphertext • Ex. Alice sends Bob the ciphertext y of a coin toss x, with Pr[Head] = 1/2, Pr[Tail] = 1/2, and Oscar observes y • Case 1: Pr[Head|y] = 1/2, Pr[Tail|y] = 1/2 (Oscar learns nothing) • Case 2: Pr[Head|y] = 1, Pr[Tail|y] = 0 (the ciphertext reveals the plaintext)

  4. Perfect secrecy when |K|=|C|=|P| • (P,C,K,E,D) is a cryptosystem where |K|=|C|=|P| • The cryptosystem provides perfect secrecy iff • every key is used with equal probability 1/|K|, and • for every x ∈ P, y ∈ C, there is a unique key K such that e_K(x) = y • Ex. One-time pad in Z2: the ciphertext C: 111 could come from P: 010 under K: 101 or from P: 111 under K: 000
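
Below is a minimal Python sketch (not from the slides) of the one-time pad over three-bit strings. It checks the unique-key condition stated above, which is what makes Pr[x|y] = Pr[x] hold when keys are chosen uniformly.

```python
from itertools import product

# One-time pad over Z_2^3: plaintexts, keys and ciphertexts are 3-bit strings.
bits = [''.join(p) for p in product('01', repeat=3)]

def encrypt(key, x):
    # Bitwise XOR of key and plaintext gives the ciphertext.
    return ''.join(str(int(k) ^ int(b)) for k, b in zip(key, x))

# For every plaintext x and ciphertext y there is exactly one key with e_K(x) = y.
for x in bits:
    for y in bits:
        keys = [k for k in bits if encrypt(k, x) == y]
        assert len(keys) == 1

# With keys chosen uniformly, every ciphertext is equally likely whatever x is,
# so Pr[x | y] = Pr[x]: observing y gives Oscar no information about x.
print("unique-key property verified for all", len(bits) ** 2, "pairs (x, y)")
```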

  5. Outline • Introduction • One-time pad • Elementary probability theory • Perfect secrecy • Entropy • Properties of entropy • Spurious keys and unicity distance • Product system

  6. Preview (1) • We want to know: the average amount of ciphertext required for an opponent to be able to uniquely compute the key, given enough computing time • (diagram: a key K maps plaintext characters x1, …, xn to ciphertext characters y1, …, yn)

  7. Preview (2) • That is, we want to know how much uncertainty about the key remains once the ciphertext is observed = the conditional entropy H(K|C^n) • We need the tools of entropy

  8. Entropy (1) • Suppose we have a discrete random variable X • What is the information gained by the outcome of an experiment? • Ex. Let X represent the toss of a coin, Pr[head] = Pr[tail] = 1/2 • For a coin toss, we could encode head as 1 and tail as 0, i.e. 1 bit of information

  9. Entropy (2) • Ex. Random variable X with Pr[x1]=1/2, Pr[x2]=1/4, Pr[x3]=1/4 • The most efficient encoding is to encode x1 as 0, x2 as 10, x3 as 11 • The less likely an outcome, the greater its uncertainty, the more information it carries, and the longer its codeword

  10. Entropy (3) • Notice: an outcome with probability 2^-n can be encoded with n bits; in general, an outcome with probability p corresponds to -log2 p bits • Ex. (cont.) The average number of bits to encode X is 1/2·1 + 1/4·2 + 1/4·2 = 1.5

  11. Entropy: definition • Suppose X is a discrete random variable which takes on values from a finite set X. Then the entropy of the random variable X is defined as H(X) = -∑ Pr[x] log2 Pr[x], where the sum is over all x ∈ X with Pr[x] > 0
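
A small Python helper (illustrative, not part of Stinson's text) that computes this quantity; the same function underlies the numerical checks after later slides.

```python
from math import log2

def entropy(probs):
    """Shannon entropy H(X) = -sum p * log2(p), ignoring zero-probability outcomes."""
    return -sum(p * log2(p) for p in probs if p > 0)

# The coin toss and the 1/2, 1/4, 1/4 distribution from the previous slides:
print(entropy([0.5, 0.5]))         # 1.0 bit
print(entropy([0.5, 0.25, 0.25]))  # 1.5 bits, matching the average codeword length
```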

  12. Entropy: example • Let P = {a, b}, Pr[a] = 1/4, Pr[b] = 3/4. K = {K1, K2, K3}, Pr[K1] = 1/2, Pr[K2] = Pr[K3] = 1/4, with the encryption matrix shown on the slide • Then H(P) ≈ 0.81, H(K) = 1.5, H(C) ≈ 1.85
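
A sketch that reproduces these numbers in Python. The encryption matrix is not visible in the transcript, so the values below assume the standard matrix from Stinson's textbook example (e_K1: a→1, b→2; e_K2: a→2, b→3; e_K3: a→3, b→4).

```python
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

# Distributions from the slide.
pr_p = {'a': 0.25, 'b': 0.75}
pr_k = {'K1': 0.5, 'K2': 0.25, 'K3': 0.25}

# Encryption matrix assumed from Stinson's textbook example (not shown in the transcript).
enc = {('K1', 'a'): 1, ('K1', 'b'): 2,
       ('K2', 'a'): 2, ('K2', 'b'): 3,
       ('K3', 'a'): 3, ('K3', 'b'): 4}

# Induced ciphertext distribution: Pr[y] = sum over (K, x) with e_K(x) = y of Pr[K] * Pr[x].
pr_c = {}
for (k, x), y in enc.items():
    pr_c[y] = pr_c.get(y, 0) + pr_k[k] * pr_p[x]

print(round(entropy(pr_p.values()), 2))  # H(P) ~ 0.81
print(round(entropy(pr_k.values()), 2))  # H(K) = 1.5
print(round(entropy(pr_c.values()), 2))  # H(C) ~ 1.85
```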

  13. Properties of entropy (1) • Def: A real-valued function f is a strictly concave function on an interval I if f((x+y)/2) > (f(x)+f(y))/2 for all x, y ∈ I with x ≠ y

  14. Properties of entropy (2) • Jensen’s inequality: Suppose f is a continuous strictly concave function on I, ∑ ai = 1 and ai > 0 for 1 ≤ i ≤ n. Then ∑ ai f(xi) ≤ f(∑ ai xi), where xi ∈ I, 1 ≤ i ≤ n. Equality holds iff x1 = ... = xn

  15. Properties of entropy (3) • Theorem: X is a random variable having a probability distribution which takes on the values p1, p2, …, pn, pi > 0, 1 ≤ i ≤ n. Then H(X) ≤ log2 n, with equality iff pi = 1/n for all i • The uniform random variable has the maximum entropy

  16. Properties of entropy (4) • Proof: H(X) = ∑ pi log2(1/pi) ≤ log2(∑ pi · 1/pi) = log2 n, applying Jensen’s inequality to the strictly concave function log2; equality holds iff all 1/pi are equal, i.e. pi = 1/n for all i
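
A quick numerical illustration of the theorem (Python, with an arbitrarily chosen non-uniform distribution): the uniform distribution on n outcomes attains log2 n, and any other distribution falls strictly below it.

```python
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

n = 4
uniform = [1 / n] * n
skewed = [0.7, 0.1, 0.1, 0.1]

print(entropy(uniform), log2(n))  # both 2.0: equality holds for the uniform distribution
print(entropy(skewed) < log2(n))  # True: any other distribution is strictly below log2 n
```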

  17. Entropy of a natural language (1) • HL: the average information per letter in English • 1. If the 26 letters were uniformly random, this would be log2 26 ≈ 4.70 • 2. Taking single-letter frequencies into account, H(P) ≈ 4.19

  18. Entropy of a natural language (2) • 3. However, successive letters have correlations • Ex. digrams, trigrams • Q: what is the entropy of two or more random variables?

  19. Properties of entropy (5) • Def: H(X,Y) = -∑x ∑y Pr[x,y] log2 Pr[x,y] • Theorem: H(X,Y) ≤ H(X) + H(Y), with equality iff X and Y are independent • Proof: Let pi = Pr[X=xi], qj = Pr[Y=yj], rij = Pr[X=xi, Y=yj], so pi = ∑j rij and qj = ∑i rij; then H(X,Y) - H(X) - H(Y) = ∑i,j rij log2(pi qj / rij) ≤ log2 ∑i,j pi qj = log2 1 = 0 by Jensen’s inequality, with equality iff rij = pi qj for all i, j

  20. Entropy of a natural language (3) • 3. Let P^n be the random variable that has as its probability distribution that of all n-grams of plaintext • Tabulation of digrams => H(P^2)/2 ≈ 3.90 • Tabulation of trigrams => H(P^3)/3, … • In general HL is estimated by H(P^n)/n for large n, and empirically 1.0 ≤ HL ≤ 1.5 (see the sketch below)
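
A rough sketch of how such estimates are produced, assuming a plaintext sample in a hypothetical file corpus.txt; a small sample will not reproduce the corpus-level figures quoted above, but the estimates do decrease as n grows.

```python
from collections import Counter
from math import log2

def ngram_entropy_rate(text, n):
    """Estimate H(P^n)/n from the empirical n-gram distribution of a sample text."""
    text = ''.join(c for c in text.lower() if c.isalpha())
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    h = -sum((c / total) * log2(c / total) for c in counts.values())
    return h / n

sample = open('corpus.txt').read()  # hypothetical corpus file
for n in (1, 2, 3):
    print(n, round(ngram_entropy_rate(sample, n), 2))
# Large English corpora give roughly 4.19 for n=1 and about 3.90 for n=2;
# the per-letter estimates decrease toward HL (around 1.0 to 1.5) as n grows.
```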

  21. Entropy of a natural language (4) • The redundancy of L is defined as RL = 1 - HL / log2 |P| • Taking HL = 1.25 gives RL ≈ 0.75: the English language is about 75% redundant! • We can compress English text to about one quarter of its original length
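
Plugging the slide’s numbers into the definition (a two-line check; the slide rounds the result to 0.75):

```python
from math import log2

H_L = 1.25                # estimated entropy per letter of English
R_L = 1 - H_L / log2(26)  # redundancy R_L = 1 - H_L / log2 |P|
print(round(R_L, 2))      # ~0.73, i.e. roughly 75% redundancy
```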

  22. Conditional entropy • For any fixed value y of Y, H(X|y) = -∑x Pr[x|y] log2 Pr[x|y] measures the uncertainty about X given that Y = y • Conditional entropy: H(X|Y) = ∑y Pr[y] H(X|y), the average uncertainty about X that remains after Y is observed • Theorem: H(X,Y) = H(Y) + H(X|Y)
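
A small self-contained check of the chain rule H(X,Y) = H(Y) + H(X|Y) on a toy joint distribution (the probability values are chosen only for illustration).

```python
from math import log2

# Toy joint distribution Pr[x, y]; values chosen only for illustration.
joint = {('x1', 'y1'): 0.25, ('x1', 'y2'): 0.25,
         ('x2', 'y1'): 0.40, ('x2', 'y2'): 0.10}

def H(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

# Marginal distribution of Y.
pr_y = {}
for (x, y), p in joint.items():
    pr_y[y] = pr_y.get(y, 0) + p

# H(X|Y) = sum_y Pr[y] * H(X | Y = y), where Pr[x|y] = Pr[x, y] / Pr[y].
h_x_given_y = 0.0
for y, py in pr_y.items():
    conditional = [joint[(x, yy)] / py for (x, yy) in joint if yy == y]
    h_x_given_y += py * H(conditional)

# The chain rule H(X,Y) = H(Y) + H(X|Y) holds (up to floating-point error).
print(round(H(joint.values()), 4), round(H(pr_y.values()) + h_x_given_y, 4))
```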

  23. Theorem about H(K|C) (1) • Let (P,C,K,E,D) be a cryptosystem; then H(K|C) = H(K) + H(P) - H(C) • Proof: H(K,P,C) = H(C|K,P) + H(K,P) • Since the key and plaintext uniquely determine the ciphertext, H(C|K,P) = 0 • Hence H(K,P,C) = H(K,P) = H(K) + H(P), because the key and plaintext are independent

  24. Theorem about H(K|C) (2) • We have H(K,P,C) = H(K,P) = H(K) + H(P) • Similarly, since the key and ciphertext uniquely determine the plaintext, H(P|K,C) = 0, so H(K,P,C) = H(K,C) • Now H(K|C) = H(K,C) - H(C) = H(K,P,C) - H(C) = H(K) + H(P) - H(C)
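
Reusing the slide-12 example (with the encryption matrix assumed there), a numerical check that H(K|C) computed directly from the conditional distributions agrees with H(K) + H(P) - H(C):

```python
from math import log2

def H(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

pr_p = {'a': 0.25, 'b': 0.75}
pr_k = {'K1': 0.5, 'K2': 0.25, 'K3': 0.25}
enc = {('K1', 'a'): 1, ('K1', 'b'): 2, ('K2', 'a'): 2,
       ('K2', 'b'): 3, ('K3', 'a'): 3, ('K3', 'b'): 4}

# Joint distribution Pr[K, C] and marginal Pr[C].
joint, pr_c = {}, {}
for (k, x), y in enc.items():
    p = pr_k[k] * pr_p[x]
    joint[(k, y)] = joint.get((k, y), 0) + p
    pr_c[y] = pr_c.get(y, 0) + p

# H(K|C) computed directly from the conditional distributions ...
h_k_given_c = sum(
    py * H([joint.get((k, y), 0) / py for k in pr_k])
    for y, py in pr_c.items()
)
# ... agrees with H(K) + H(P) - H(C).
print(round(h_k_given_c, 4))
print(round(H(pr_k.values()) + H(pr_p.values()) - H(pr_c.values()), 4))
```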

  25. Results (1) • Define random variables P^n and C^n for the first n plaintext and ciphertext characters produced under a single key K • Then H(K|C^n) = H(K) + H(P^n) - H(C^n) • Setting |P| = |C|: H(C^n) ≤ n log2 |P| and, for large n, H(P^n) ≈ n HL = n(1 - RL) log2 |P|, so H(K|C^n) ≥ H(K) - n RL log2 |P|

  26. Spurious keys (1) • Ex. Oscar obtains the ciphertext WNAJW, which is encrypted using a shift cipher • K = 5 gives plaintext river • K = 22 gives plaintext arena • One is the correct key, and the other is spurious • Goal: prove a bound on the expected number of spurious keys
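
A minimal brute-force sketch of this example (Python, not from the slides): trying all 26 shift keys on WNAJW turns up both candidate plaintexts.

```python
def shift_decrypt(ciphertext, key):
    # Decrypt a shift cipher over the 26-letter alphabet A..Z.
    return ''.join(chr((ord(c) - ord('A') - key) % 26 + ord('A')) for c in ciphertext)

# Try every key on WNAJW; only keys 5 (RIVER) and 22 (ARENA) give English words,
# so one key is correct and the other is spurious.
for key in range(26):
    print(key, shift_decrypt("WNAJW", key))
```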

  27. Spurious keys (2) • Given y ∈ C^n, the set of possible keys is K(y) = {K ∈ K : there exists x ∈ P^n with Pr[x] > 0 and e_K(x) = y} • The number of spurious keys is |K(y)| - 1 • The average number of spurious keys is s̄_n = ∑y Pr[y](|K(y)| - 1) = ∑y Pr[y]|K(y)| - 1

  28. Relate H(K|C^n) to spurious keys (1) • By definition, H(K|C^n) = ∑y Pr[y] H(K|y) ≤ ∑y Pr[y] log2 |K(y)| ≤ log2 ∑y Pr[y] |K(y)| = log2(s̄_n + 1), using Jensen’s inequality in the middle step

  29. Relate H(K|C^n) to spurious keys (2) • We have derived H(K|C^n) ≥ H(K) - n RL log2 |P| • So log2(s̄_n + 1) ≥ H(K) - n RL log2 |P|

  30. Relate H(K|C^n) to spurious keys (3) • Theorem: if |C| = |P| and keys are chosen equiprobably, the expected number of spurious keys satisfies s̄_n ≥ |K| / |P|^(n RL) - 1 • As n increases, the term |K| / |P|^(n RL) tends to 0, so for large enough n the bound no longer guarantees any spurious keys

  31. Relate H(K|C^n) to spurious keys (4) • Set the expected number of spurious keys to 0 and solve for n: the unicity distance is n0 ≈ log2 |K| / (RL log2 |P|) • For the substitution cipher, |P| = |C| = 26 and |K| = 26!, giving n0 ≈ 25 (see the sketch below) • Unicity distance: the average amount of ciphertext required for an opponent to be able to uniquely compute the key, given enough time
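
The arithmetic behind the substitution-cipher figure, as a short Python check (with RL = 0.75 taken from the earlier redundancy slide):

```python
from math import log2, factorial

# Unicity distance n0 ~ log2|K| / (R_L * log2|P|) for the substitution cipher.
P_size = 26
K_size = factorial(26)
R_L = 0.75  # redundancy of English, from the earlier slide

n0 = log2(K_size) / (R_L * log2(P_size))
print(round(n0, 1))  # roughly 25 ciphertext letters
```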

  32. Product cryptosystem • S1 = (P,P,K1,E1,D1), S2 = (P,P,K2,E2,D2) • The product of the two cryptosystems is S1 x S2 = (P,P, K1 x K2, E, D) • Encryption: e_(K1,K2)(x) = e_K2(e_K1(x)) • Decryption: d_(K1,K2)(y) = d_K1(d_K2(y))
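
A minimal sketch of the product construction, using two shift ciphers over Z26 as the component systems (an illustrative choice, not the slide’s); it also previews the idempotence remark on the next slide, since two shifts compose to a single shift.

```python
def shift_encrypt(x, key):
    return ''.join(chr((ord(c) - ord('A') + key) % 26 + ord('A')) for c in x)

def shift_decrypt(y, key):
    return ''.join(chr((ord(c) - ord('A') - key) % 26 + ord('A')) for c in y)

def product_encrypt(x, k1, k2):
    # e_(K1,K2)(x) = e_K2(e_K1(x))
    return shift_encrypt(shift_encrypt(x, k1), k2)

def product_decrypt(y, k1, k2):
    # d_(K1,K2)(y) = d_K1(d_K2(y))
    return shift_decrypt(shift_decrypt(y, k2), k1)

msg = "RIVER"
c = product_encrypt(msg, 5, 22)
print(c, product_decrypt(c, 5, 22))            # decrypts back to RIVER
# Two shifts compose to a single shift with key (5 + 22) mod 26, which is why
# the shift cipher is idempotent: S x S is the same cryptosystem as S.
print(c == shift_encrypt(msg, (5 + 22) % 26))  # True
```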

  33. Product cryptosystem (cont.) • Two cryptosystems M and S commute if M x S = S x M • Idempotent cryptosystem: S x S = S (written S^2 = S) • Ex. the shift cipher • If a cryptosystem is not idempotent, then there is a potential increase in security by iterating it several times

  34. How to find a non-idempotent cryptosystem? • Thm: If S and M are both idempotent and they commute, then S x M will also be idempotent • Proof: (S x M) x (S x M) = S x (M x S) x M = S x (S x M) x M = (S x S) x (M x M) = S x M • Idea: find simple S and M such that they do not commute; then S x M is possibly non-idempotent
