
Lecturer: Moni Naor

Foundations of Cryptography, Lecture 8: Application of GL, Next-bit Unpredictability, Pseudo-Random Functions.





Presentation Transcript


  1. Foundations of Cryptography, Lecture 8: Application of GL, Next-bit Unpredictability, Pseudo-Random Functions. Lecturer: Moni Naor

  2. Recap of last week’s lecture • Hardcore Predicates and Pseudo-Random Generators • Inner product is a hardcore predicate for all functions • Proof via list decoding • Interpretations • Applications to Diffie-Hellman

  3. Inner Product Hardcore bit • The inner product bit: choose r ∈R {0,1}^n and let h(x,r) = r∙x = ∑ xi ri mod 2 Theorem [Goldreich-Levin]: for any one-way function the inner product is a hardcore predicate Proof structure: an algorithm A’ for inverting f • There are many x’s for which A returns a correct answer (r∙x) on a ½+ε fraction of the r’s • Reconstruction algorithm R: take an algorithm A that guesses h(x,r) correctly with probability ½+ε over the r’s and output a list of candidates for x (the main step!) • R makes no use of the y info (except feeding it to A) • Choose from the list the/an x such that f(x)=y
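A minimal sketch of the inner-product predicate from the slide: h(x,r) = r∙x mod 2 over bit vectors, together with a brute-force check (our own toy illustration) that for a fixed nonzero x the predicate is perfectly balanced over all r, which is why a ½+ε predictor carries real information about x.

```python
from itertools import product

def inner_product_bit(x, r):
    """Inner product mod 2 of two equal-length bit tuples: h(x, r) = <x, r> mod 2."""
    assert len(x) == len(r)
    return sum(xi * ri for xi, ri in zip(x, r)) % 2

# For a fixed nonzero x, h(x, r) is balanced over random r:
# exactly half of all 2^n choices of r give 0 and half give 1.
x = (1, 0, 1)
values = [inner_product_bit(x, r) for r in product((0, 1), repeat=3)]
assert values.count(0) == values.count(1) == 4
```

The reconstruction algorithm R in the proof only needs oracle access to such a predictor; it never inspects y itself.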

  4. Application: if subset sum is one-way, then it is a pseudo-random generator • Subset sum problem: given • n numbers 0 ≤ a1, a2, …, an ≤ 2^m • a target sum y • find a subset S ⊆ {1,…,n} with ∑i∈S ai = y • Subset sum one-way function f: {0,1}^{mn+n} → {0,1}^{mn+m}: f(a1, a2, …, an, x1, x2, …, xn) = (a1, a2, …, an, ∑i=1..n xi ai mod 2^m) If m < n, then we get out fewer bits than we put in. If m > n, then we get out more bits than we put in. Theorem: if for m > n subset sum is a one-way function, then it is also a pseudo-random generator.
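The subset-sum function on the slide can be sketched directly (names are ours; this is an illustration of the definition, not an implementation anyone should treat as one-way at these toy sizes):

```python
def subset_sum_f(a, x, m):
    """f(a_1..a_n, x_1..x_n) = (a_1..a_n, sum of x_i * a_i mod 2^m).

    a: list of n numbers in [0, 2^m); x: list of n selector bits.
    """
    assert len(a) == len(x)
    y = sum(ai for ai, xi in zip(a, x) if xi) % (2 ** m)
    return a, y

a = [3, 5, 6]          # n = 3 numbers of m = 3 bits each
x = [1, 0, 1]          # the hidden subset
_, y = subset_sum_f(a, x, 3)
assert y == (3 + 6) % 8 == 1
```

Counting bits as on the slide: the input is mn + n bits (the ai plus the xi) and the output is mn + m bits, so m > n is exactly the stretching regime where a one-way f becomes a candidate generator.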

  5. Subset Sum Generator Idea of proof: use the distinguisher A to compute r∙x. For simplicity, do the computation mod P for a large prime P. Given r ∈ {0,1}^n and (a1, a2, …, an, y), generate a new problem (a’1, a’2, …, a’n, y’): • Choose c ∈R ZP • Let a’i = ai if ri = 0 and a’i = ai + c mod P if ri = 1 • Guess k ∈R {0,…,n} - the value of ∑ xi ri • the number of locations where both x and r are 1 • Let y’ = y + c∙k mod P Run the distinguisher A on (a’1, a’2, …, a’n, y’) • output what A says, XORed with parity(k) Claim: if k is correct, then (a’1, a’2, …, a’n, y’) is pseudo-random Claim: for any incorrect k, (a’1, a’2, …, a’n, y’) is random: y’ = z + (k-h)c mod P, where z = ∑i=1..n xi a’i mod P and h = ∑ xi ri Since Prob[A=‘0’|pseudo] = ½+ε and Prob[A=‘0’|random] = ½, the probability of guessing r∙x is 1/n∙(½+ε) + (n-1)/n∙(½) = ½+ε/n The probability is over a1, a2, …, an, x, r and the randomness of A

  6. Interpretations of the Goldreich-Levin Theorem • A tool for constructing pseudo-random generators The main part of the proof: • A mechanism for translating `general confusion’ into randomness • Diffie-Hellman example • List decoding of Hadamard Codes • works in the other direction as well (for any code with good list decoding) • List decoding, as opposed to unique decoding, allows getting much closer to the distance of the code • `Explains’ unique decoding when the prediction probability was 3/4+ε • Finding all linear functions agreeing with a function given as a black box • Learning all Fourier coefficients larger than ε • If the Fourier coefficients are concentrated on a small set – can find them • True for AC0 circuits • Decision Trees

  7. Two important techniques for showing pseudo-randomness • Hybrid argument • Next-bit prediction and pseudo-randomness

  8. Hybrid argument To prove that two distributions D and D’ are indistinguishable: • suggest a collection of distributions D = D0, D1, …, Dk = D’ If D and D’ can be distinguished, then there is a pair Di and Di+1 that can be distinguished. Advantage ε in distinguishing between D and D’ means advantage ε/k between some Di and Di+1. Use a distinguisher for the pair Di and Di+1 to derive a contradiction.

  9. Composing PRGs Composition Let • g1 be an (ℓ1, ℓ2)-pseudo-random generator • g2 be an (ℓ2, ℓ3)-pseudo-random generator Consider g(x) = g2(g1(x)) Claim: g is an (ℓ1, ℓ3)-pseudo-random generator Proof: consider three distributions on {0,1}^ℓ3 • D1: y uniform in {0,1}^ℓ3 • D2: y = g(x) for x uniform in {0,1}^ℓ1 • D3: y = g2(z) for z uniform in {0,1}^ℓ2 By assumption there is a distinguisher A between D1 and D2. By the triangle inequality, A must either distinguish between D1 and D3 - can use A to distinguish g2 - or distinguish between D2 and D3 - can use A to distinguish g1
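A toy numeric illustration of the triangle-inequality step behind this and the hybrid argument. The acceptance probabilities below are made-up numbers for a fixed distinguisher on hybrids D0..D3; the point is only the arithmetic: the end-to-end advantage is bounded by the sum over adjacent hybrids, so some adjacent pair carries at least a 1/k fraction of it.

```python
def advantage(p, q):
    """|Pr[A outputs '1' on the first] - Pr[A outputs '1' on the second]|."""
    return abs(p - q)

probs = [0.50, 0.52, 0.55, 0.61]   # Pr[A = '1'] on D0, D1, D2, D3 (made up)

total = advantage(probs[0], probs[-1])
steps = [advantage(probs[i], probs[i + 1]) for i in range(len(probs) - 1)]

# End-to-end advantage <= sum of adjacent-hybrid advantages...
assert total <= sum(steps) + 1e-12
# ...hence some adjacent pair is distinguished with advantage >= total / k.
assert max(steps) >= total / len(steps)
```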

  10. Composing PRGs When composing • a generator secure against advantage ε1 and • a generator secure against advantage ε2 we get security against advantage ε1+ε2 When composing the single-bit expansion generator m times: a distinguisher with advantage ε against the result gives advantage at least ε/m against the single-bit generator Hybrid argument: to prove that two distributions D and D’ are indistinguishable, suggest a collection of distributions D = D0, D1, …, Dk = D’ such that if D and D’ can be distinguished, there is a pair Di and Di+1 that can be distinguished. Difference ε between D and D’ means ε/k between some Di and Di+1. Use such a distinguisher to derive a contradiction.

  11. From single bit expansion to many bit expansion, based on one-way permutations Iterate f on x and extract one hardcore bit per step: from seed (x, r), the internal configuration after i steps is f(i)(x), and the output bits are h(x,r), h(f(x),r), h(f(2)(x),r), …, h(f(m-1)(x),r) • Can make r and f(m)(x) public • But not any other internal state • Can make m as large as needed

  12. From single bit expansion to many bit expansion Given g: {0,1}^n → {0,1}^{n+1}, iterate it on the seed x: x1 = g(x)|1..n and output g(x)|n+1; then xi+1 = g(xi)|1..n, outputting g(xi)|n+1 at each step, ending with xm = g(xm-1)|1..n and output bit g(xm-1)|n+1 • Should not make any internal state xi public • Except xm • Can make m as large as needed
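The iteration above can be sketched with a TOY single-bit expander built from SHA-256. This is an assumption for illustration only: hashing is not a proven pseudo-random generator; the point is only the bookkeeping of state vs. output bits.

```python
import hashlib

def toy_g(state: bytes):
    """TOY stand-in for g: maps a state to (next state, one output bit)."""
    d = hashlib.sha256(state).digest()
    return d[:len(state)], d[-1] & 1

def expand(seed: bytes, m: int):
    """Output m bits by iterating g, keeping only the state internal."""
    bits, state = [], seed
    for _ in range(m):
        state, b = toy_g(state)   # x_{i+1} = g(x_i)|state, emit g(x_i)|last bit
        bits.append(b)
    return bits

stream = expand(b"seed", 16)
assert len(stream) == 16 and set(stream) <= {0, 1}
assert stream == expand(b"seed", 16)   # deterministic given the seed
```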

  13. Exercise • Let {Dn} and {D’n} be two distributions that are • Computationally indistinguishable • Polynomial time samplable • Suppose that y1, …, ym are all sampled according to {Dn} or all sampled according to {D’n} • Prove: no probabilistic polynomial time machine can tell, given y1, …, ym, whether they were sampled from {Dn} or {D’n}

  14. Existence of PRGs What we have proved: Theorem: if pseudo-random generators stretching by a single bit exist, then pseudo-random generators stretching by any polynomial factor exist Theorem: if one-way permutations exist, then pseudo-random generators exist A much harder theorem to prove: Theorem [HILL]: if one-way functions exist, then pseudo-random generators exist

  15. Two important techniques for showing pseudo-randomness • Hybrid argument • Next-bit prediction and pseudo-randomness

  16. Next-bit Test Definition: a function g: {0,1}* → {0,1}* is next-bit unpredictable if: • It is polynomial time computable • It stretches the input: |g(x)| > |x| • denote by ℓ(n) the length of the output on inputs of length n • If the input (seed) is random, then the output passes the next-bit test: for any prefix length 0 ≤ i < ℓ(n), for any PPT adversary A acting as a predictor that receives the first i bits of y = g(x) and tries to guess the next bit, for any polynomial p(n) and sufficiently large n |Prob[A(y1, y2, …, yi) = yi+1] – 1/2| < 1/p(n) Theorem: a function g: {0,1}* → {0,1}* is next-bit unpredictable if and only if it is a pseudo-random generator

  17. Proof of equivalence • If g is a presumed pseudo-random generator and there is a predictor for the next bit, we can use it to distinguish Distinguisher: • If the predictor is correct: guess ‘pseudo-random’ • If the predictor is incorrect: guess ‘random’ • On outputs of g the distinguisher is correct with probability at least 1/2 + 1/p(n) • On uniformly random inputs the distinguisher is correct with probability exactly 1/2
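A minimal sketch of this direction, using a deliberately weak toy "generator" of our own invention (each output bit repeats the previous one) so that a trivial predictor succeeds, and the wrapper above becomes a distinguisher:

```python
import random

def toy_gen(seed_bit, length):
    """TOY predictable generator: every bit repeats the seed bit."""
    return [seed_bit] * length

def predictor(prefix):
    """Exploits the toy generator: guess the next bit equals the last one."""
    return prefix[-1] if prefix else 0

def distinguish(y, i):
    """Guess 'pseudo' iff the predictor gets bit i right."""
    return "pseudo" if predictor(y[:i]) == y[i] else "random"

rng = random.Random(0)
# On generator outputs the predictor is always right:
assert all(distinguish(toy_gen(b, 8), 4) == "pseudo" for b in (0, 1))
# On truly random strings it is right only about half the time:
hits = sum(distinguish([rng.randrange(2) for _ in range(8)], 4) == "pseudo"
           for _ in range(2000))
assert 800 < hits < 1200
```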

  18. …Proof of equivalence • If there is a distinguisher A between the output of g and random: form a sequence of distributions and use the successes of A to predict the next bit for some prefix. With g(x) = y1, y2, …, yℓ and r1, r2, …, rℓ ∈R Uℓ: Dℓ: y1, y2, …, yℓ-1, yℓ Dℓ-1: y1, y2, …, yℓ-1, rℓ Di: y1, y2, …, yi, ri+1, …, rℓ D0: r1, r2, …, rℓ-1, rℓ There exists an 0 ≤ i ≤ ℓ-1 where A can distinguish Di from Di+1. Can use A to predict yi+1!

  19. Next-block Unpredictability Suppose that g maps a given seed S into a sequence of blocks y1, y2, …; let ℓ(n) be the number of blocks given a seed of length n • g passes the next-block unpredictability test: for any prefix length 0 ≤ i < ℓ(n), for any probabilistic polynomial time adversary A that receives the first i blocks of y = g(S) and tries to guess the next block yi+1, for any polynomial p(n) and sufficiently large n Prob[A(y1, y2, …, yi) = yi+1] < 1/p(n) Homework: show how to convert a next-block unpredictable generator into a pseudo-random generator.

  20. Pseudo-random Generators and Encryption The output of a pseudo-random generator should be able to replace any uniformly random string • When running an algorithm • If the results are measurably different, the algorithm can be used as a distinguisher • Basis of derandomization • For encrypting communication: as a one-time pad • Need to define the type of desired protection of messages • Semantic Security • Indistinguishability of encryption

  21. The world so far (diagram): Signature Schemes, Pseudo-random generators, UOWHFs and Two-guard Identification are all built from One-way functions, whose existence implies P ≠ NP • Will soon see: • Computational Pseudorandomness • Shared-key Encryption and Authentication

  22. Pseudo-Random Generators, concrete version Gn: {0,1}^m → {0,1}^n Instead of “passes all polynomial time statistical tests”: (t,ε)-pseudo-random - no test A running in time t can distinguish with advantage ε

  23. Recall: Three Basic issues in cryptography • Identification • Authentication • Encryption Solve in a shared-key environment: A and B both hold a key S

  24. Identification: remote login using pseudo-random sequence A and B share a key S ∈ {0,1}^k In order for A to identify itself to B: • Generate the sequence Gn(S) • For each identification session: send the next block of Gn(S)
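A sketch of this protocol with the shared sequence Gn(S) modeled by a TOY block generator, HMAC-SHA256 over a counter (our stand-in for the slide's abstract G, not part of the slide):

```python
import hashlib
import hmac

def block(S: bytes, i: int) -> bytes:
    """TOY i-th block of the shared sequence derived from key S."""
    return hmac.new(S, i.to_bytes(8, "big"), hashlib.sha256).digest()

class Party:
    """Holds the shared key and a counter for the next unused block."""
    def __init__(self, S: bytes):
        self.S, self.i = S, 0

    def next_block(self) -> bytes:
        b = block(self.S, self.i)
        self.i += 1
        return b

S = b"shared-key"
alice, bob = Party(S), Party(S)
# Each session: A sends its next block; B recomputes and compares.
for _ in range(3):
    assert bob.next_block() == alice.next_block()
```

Note that both sides must keep their counters synchronized, which is exactly one of the problems the next slide raises.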

  25. Problems... • More than two parties • Malicious adversaries - may add noise • Coordinating the location (block number) • Better approach: Challenge-Response

  26. Challenge-Response Protocol • B selects a random location and sends it to A (“What’s this?”) • A sends the value at that random location

  27. Desired Properties • Very long string - prevents repetitions • Random access to the sequence • Unpredictability - cannot guess the value at a random location • even after seeing values at many parts of the string of the adversary’s choice • Pseudo-randomness implies unpredictability • Not the other way around for blocks

  28. Authenticating Messages • A wants to send message M ∈ {0,1}^n to B • B should be confident that A is indeed the sender of M One-time application: S = (a,b), where a, b ∈R {0,1}^n To authenticate M: supply a∙M ⊕ b Computation is done in GF[2^n]
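The one-time MAC tag a∙M ⊕ b can be sketched concretely. We pick n = 8 and the AES polynomial x^8+x^4+x^3+x+1 as our concrete choice of field (the slide works in an abstract GF[2^n]); addition in GF(2^n) is XOR.

```python
def gf_mul(x: int, y: int, poly: int = 0x11B, n: int = 8) -> int:
    """Carry-less multiplication in GF(2^n) modulo the polynomial `poly`."""
    r = 0
    for _ in range(n):
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x & (1 << n):   # reduce when the degree reaches n
            x ^= poly
    return r

def mac(a: int, b: int, M: int) -> int:
    """One-time tag a*M + b in GF(2^n); '+' is XOR."""
    return gf_mul(a, M) ^ b

a, b = 0x57, 0x2A          # one-time key S = (a, b)
M = 0x83
tag = mac(a, b, M)
assert tag == mac(a, b, M)        # the receiver recomputes and compares
assert tag != mac(a, b, M ^ 1)    # a different message gives a different tag
```

The pair (a, b) must be used for a single message only; reusing it leaks linear relations about the key.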

  29. Problems and Solutions • Problems - same as for identification • If a very long random string is available - • can use it for one-time authentication • Works even if a and b are only random looking

  30. Encryption of Messages • A wants to send message M ∈ {0,1}^n to B • only B should be able to learn M One-time application: S = a, where a ∈R {0,1}^n To encrypt M: send a ⊕ M
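A minimal sketch of this one-time encryption, with the one-time key a drawn fresh for each message (byte strings stand in for {0,1}^n):

```python
import secrets

def otp_xor(key: bytes, msg: bytes) -> bytes:
    """One-time pad: XOR a key of the same length into the message."""
    assert len(key) == len(msg)
    return bytes(k ^ m for k, m in zip(key, msg))

M = b"attack at dawn"
a = secrets.token_bytes(len(M))   # one-time key S = a
C = otp_xor(a, M)                 # A sends C = a XOR M
assert otp_xor(a, C) == M         # B decrypts with the same XOR
```

As on the previous slides, the key a must never be reused: XORing two ciphertexts under the same a reveals M1 ⊕ M2.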

  31. Encryption of Messages • If a very long random looking string is available - • can use it as in one-time encryption

  32. Pseudo-random Function • A way to provide an extremely long shared string

  33. Pseudo-random Functions Concrete Treatment: F: {0,1}^k × {0,1}^n → {0,1}^m (key × Domain → Range) Denote Y = FS(X) A family of functions Φk = {FS | S ∈ {0,1}^k} is (t, ε, q)-pseudo-random if it is • Efficiently computable - random access and...

  34. (t,,q)-pseudo-random The tester A that can choose adaptively • X1 and gets Y1= FS (X1) • X2 and gets Y2 = FS (X2 ) … • Xq and gets Yq= FS (Xq) • Then A has to decide whether • FS R Φkor • FS R R n  m =  F| F:0,1n  0,1m 

  35. (t,,q)-pseudo-random For a function F chosen at random from (1) Φk ={FS | S0,1k  (2)R n  m =  F| F:0,1n  0,1m  For all t-time machines A that choose qlocations and try to distinguish (1) from (2) ProbA ‘1’  FR Fk - ProbA ‘1’  FRR n  m   

  36. Equivalent/Non-Equivalent Definitions • Instead of the next bit test: for X ∉ {X1, X2, …, Xq} chosen by A, decide whether a given Y is • Y = FS(X) or • Y ∈R {0,1}^m • Adaptive vs. Non-adaptive • Unpredictability vs. pseudo-randomness • A pseudo-random sequence generator g: {0,1}^m → {0,1}^n is • a pseudo-random function on the small domain {0,1}^{log n} → {0,1} with key in {0,1}^m

  37. Application to the basic issues in cryptography Solution using a shared key S: Identification: B to A: X ∈R {0,1}^n; A to B: Y = FS(X); B verifies Authentication: A to B: Y = FS(M) (beware replay attacks) Encryption: A chooses X ∈R {0,1}^n; A to B: ⟨X, Y = FS(X) ⊕ M⟩
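The three protocols on this slide can be sketched together, again with FS modeled by HMAC-SHA256 (our stand-in for the abstract PRF; names S, X, M are illustrative):

```python
import hashlib
import hmac
import secrets

def F(S: bytes, X: bytes) -> bytes:
    """TOY pseudo-random function F_S."""
    return hmac.new(S, X, hashlib.sha256).digest()

S = b"shared-key"

# Identification: B challenges with a fresh random X, A answers F_S(X),
# and B recomputes F_S(X) to verify.
X = secrets.token_bytes(16)
assert F(S, X) == F(S, X)

# Authentication: the tag for a message M is F_S(M).
tag = F(S, b"wire $100 to Bob")

# Encryption: A picks a fresh X and sends <X, F_S(X) XOR M>.
M = b"16-byte message!"
pad = F(S, X)[:len(M)]
C = bytes(p ^ m for p, m in zip(pad, M))
assert bytes(p ^ c for p, c in zip(pad, C)) == M   # B decrypts with F_S(X)
```

The fresh X per encryption is what lets one short key S safely encrypt many messages, in contrast to the one-time constructions of the earlier slides.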

  38. Reading Assignment • Naor and Reingold, From Unpredictability to Indistinguishability: A Simple Construction of Pseudo-Random Functions from MACs, Crypto ’98. www.wisdom.weizmann.ac.il/~naor/PAPERS/mac_abs.html • Gradwohl, Naor, Pinkas and Rothblum, Cryptographic and Physical Zero-Knowledge Proof Systems for Solutions of Sudoku Puzzles • Especially Sections 1-3 www.wisdom.weizmann.ac.il/~naor/PAPERS/sudoku_abs.html

  39. Sources • Goldreich’s Foundations of Cryptography, volumes 1 and 2 • M. Blum and S. Micali, How to Generate Cryptographically Strong Sequences of Pseudo-Random Bits, SIAM J. on Computing, 1984. • O. Goldreich and L. Levin, A Hard-Core Predicate for all One-Way Functions, STOC 1989. • Goldreich, Goldwasser and Micali, How to Construct Random Functions, Journal of the ACM 33, 1986, 792-807.
