
Codes and Pseudorandomness : A Survey




Presentation Transcript


  1. Codes and Pseudorandomness: A Survey David Zuckerman, University of Texas at Austin

  2. Randomness and Computing • Randomness extremely useful in computing. • Randomized algorithms • Monte Carlo simulations • Cryptography • Distributed computing • Problem: high-quality randomness expensive.

  3. What is minimal randomness requirement? • Can we eliminate randomness completely? • If not: • Can we minimize quantity of randomness? • Can we minimize quality of randomness? • What does this mean?

  4. What is minimal randomness requirement? • Can we eliminate randomness completely? • If not: • Can we minimize quantity of randomness? • Pseudorandom generator • Can we minimize quality of randomness? • Randomness extractor

  5. Outline • PRGs and Codes • Intro to PRGs. • Various connections. • Extractors and Codes • Intro to Extractors. • Connections with list decoding. • Non-Malleable Codes and Extractors. • Conclusions

  6. Pseudorandom Numbers • Computers rely on pseudorandom generators: a PRG stretches a short random string (e.g., 71294) into a long "random-enough" string (e.g., 141592653589793238). • What does "random enough" mean?

  7. Modern Approach to PRGs [Blum-Micali 1982, Yao 1982] • Require the PRG to "fool" all efficient algorithms: an algorithm behaves ≈ the same on pseudorandom input as on truly random input.

  8. Which efficient algorithms? • A poly-time PRG fooling all polynomial-size circuits implies NP ≠ P. • So either: • Make an unproven assumption, or • Try to fool interesting subclasses of algorithms.

  9. Existence vs. Explicit Construction • Most functions are excellent PRGs. • Challenge: find explicit one. • Most codes have excellent properties. • Known: good explicit codes. • Can codes give good PRGs?

  10. Idea 1: PRG = Random Codeword • Choose a random codeword in an [n,k] code. • n random variables, |sample space| = 2^k. • But: a linear code can't fool all linear tests. • t-wise independence: • Dual distance > t ⇒ any t coordinates are independent as random variables, since any t columns of G are linearly independent. • t=2: Hadamard code, |Ω| = n+1 [Lancaster 1965]. • t odd: dual BCH code, |Ω| = 2(n+1)^((t-1)/2) [ABI '86].
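The dual-distance criterion can be checked directly on the t=2 case above (toy parameter k=4, chosen for illustration): the simplex/Hadamard code's dual has distance 3 > 2, so a random codeword gives n = 2^k − 1 pairwise-independent bits from a sample space of size n + 1.

```python
from itertools import product
from collections import Counter

k = 4
# Generator matrix of the simplex (Hadamard) code: columns = all nonzero
# vectors of F_2^k.  Any 2 distinct columns are linearly independent over F_2,
# so any 2 output coordinates are pairwise independent.
cols = [v for v in product([0, 1], repeat=k) if any(v)]
n = len(cols)  # n = 2^k - 1 = 15

def codeword(msg):
    # One PRG sample: the codeword mG for message m.
    return tuple(sum(m * c for m, c in zip(msg, col)) % 2 for col in cols)

space = [codeword(m) for m in product([0, 1], repeat=k)]  # |Ω| = 2^k = n + 1

# Pairwise independence: the joint distribution of any two fixed coordinates
# (here 0 and 7) is exactly uniform over {0,1}^2 across the sample space.
counts = Counter((w[0], w[7]) for w in space)
assert len(counts) == 4 and all(v == len(space) // 4 for v in counts.values())
```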

  11. Idea 2: PRG = Random Column of G • Sample space Ω = set of columns of the k×n generator matrix G. • k random variables, |sample space| = n. • If dual distance > 4, then Ω is a Sidon set: all pairwise sums distinct. • Dual BCH code: |Ω| = 2^((k-1)/2). • Don't need the row of 1's: Ω = {(x, x^3) | x in F_2^(k/2)}.
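A sanity check of the Sidon property of Ω = {(x, x³)}, with field size 2⁴ chosen purely for illustration (the cubing map is APN over F_2^m, which is what makes its graph a Sidon set):

```python
# Ω = {(x, x^3) : x in F_{2^4}}, with field arithmetic mod x^4 + x + 1.
M = 4
MOD = 0b10011  # x^4 + x + 1, irreducible over F_2

def gf_mul(a, b):
    # Carry-less multiplication followed by reduction mod MOD.
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= MOD
    return r

def cube(x):
    return gf_mul(x, gf_mul(x, x))

omega = [(x, cube(x)) for x in range(1 << M)]

# Sidon property: the XOR-sums of all unordered pairs of distinct points
# are pairwise distinct.
sums = [(omega[i][0] ^ omega[j][0], omega[i][1] ^ omega[j][1])
        for i in range(len(omega)) for j in range(i + 1, len(omega))]
assert len(sums) == len(set(sums))  # all 120 pair-sums distinct
```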

  12. PRG = Random Column of G • Say all codewords ≠ 0 have relative weight ½ ± ε. • Any parity ⊕_{i in S} X_i, S ≠ ∅, corresponds to a nonzero codeword, so ½−ε ≤ Pr[⊕_{i in S} X_i = 1] ≤ ½+ε: an ε-biased space [NN '90]. • RS concatenated with Hadamard: |Ω| = O(k^2/ε^2) [AGHP '90]. • AG concatenated with Hadamard: |Ω| = O(k/ε^3). • Degree < genus: |Ω| = O((k/ε^2)^(5/4)) [BT 2009]. • Optimal, non-explicit: O(k/ε^2). (Logs ignored.)
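A runnable sketch of a small-bias space in the spirit of the AGHP powering construction (tiny hypothetical parameters; the output bit is the low-order bit of x·yⁱ in F_2^5, a choice of linear functional made here for illustration):

```python
from itertools import combinations

M, K = 5, 6                # field F_{2^5}; K output bits
MOD = 0b100101             # x^5 + x^2 + 1, irreducible over F_2

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= MOD
    return r

def sample(x, y):
    # Bit i = low bit of x * y^i.  A parity over a set S then equals the low
    # bit of x * p(y) for the nonzero polynomial p(Y) = sum_{i in S} Y^i of
    # degree < K, which has at most K-1 roots y.
    out, p = [], 1
    for _ in range(K):
        out.append(gf_mul(x, p) & 1)
        p = gf_mul(p, y)
    return out

space = [sample(x, y) for x in range(1 << M) for y in range(1 << M)]

def bias(S):
    return abs(sum((-1) ** (sum(w[i] for i in S) % 2) for w in space)) / len(space)

max_bias = max(bias(S) for r in range(1, K + 1) for S in combinations(range(K), r))
assert max_bias <= (K - 1) / 2 ** M   # epsilon = (K-1)/2^M for this toy instance
```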

  13. PRGs from Hard Functions [NW '88] • PRGs ⇒ lower bounds. • Nisan-Wigderson: lower bounds ⇒ PRGs. • Suppose f is hard on average. Use a combinatorial design (a weight-w code): sets S_i with |S_i| = w and small pairwise intersections; the i-th output bit is f evaluated on the seed bits indexed by S_i.

  14. Worst-Case ⇒ Average-Case Hardness [L, BF] • Given: worst-case hard f: {0,1}^w → {0,1}. • Encode f using an RM code as g: F_q^w → F_q. • g = the unique multilinear function s.t. g = f on {0,1}^w. • g is average-case hard: efficiently compute g on a 1 − 1/(4m) fraction ⇒ efficiently compute g everywhere whp. • Pick a random line L with L(0) = x. • degree(g(L(·))) ≤ w. • Interpolate g(L(0)) from g(L(1)), …, g(L(w+1)).
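The self-correction step above can be sketched end to end (toy parameters Q = 101 and W = 3, and an arbitrary Boolean f, all chosen for illustration): build the multilinear extension g, then recover g(x) by interpolating g along a random line through x.

```python
from itertools import product
import random

Q, W = 101, 3  # small prime field and dimension, for illustration only

def f(b):
    # An arbitrary Boolean function on {0,1}^W.
    return (b[0] & b[1]) ^ b[2]

def g(x):
    # Multilinear extension of f over F_Q.
    total = 0
    for b in product([0, 1], repeat=W):
        term = f(b)
        for xi, bi in zip(x, b):
            term = term * (xi if bi else 1 - xi) % Q
        total = (total + term) % Q
    return total

def interpolate_at_zero(points):
    # Lagrange interpolation at t = 0 of the degree-<=W polynomial g(L(t)).
    total = 0
    for ti, v in points:
        num, den = 1, 1
        for tj, _ in points:
            if tj != ti:
                num = num * -tj % Q
                den = den * (ti - tj) % Q
        total = (total + v * num * pow(den, Q - 2, Q)) % Q
    return total

x = [random.randrange(Q) for _ in range(W)]
d = [random.randrange(Q) for _ in range(W)]   # random line L(t) = x + t*d
pts = [(t, g([(xi + t * di) % Q for xi, di in zip(x, d)])) for t in range(1, W + 2)]
assert interpolate_at_zero(pts) == g(x)       # W+1 points recover g at L(0) = x
```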

  15. Local Decodability • Compute any bit of the message whp by querying at most r bits of the encoding. • RM codes. • New family: Matching Vector codes [Yekhanin, Efremenko, …]. Beats RM codes. • Stronger notion: local correctability: can compute any bit of the encoding. • RM codes.
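Local correctability in its simplest form, for the Hadamard code (a first-order RM code): two queries to a 10%-corrupted codeword recover any codeword position with high probability. A minimal sketch with illustrative parameters:

```python
import random

random.seed(0)
n = 8
x = 0b10110101
ip = lambda a, b: bin(a & b).count("1") % 2
code = [ip(x, r) for r in range(2 ** n)]   # Hadamard encoding of x

corrupted = code[:]
for pos in random.sample(range(2 ** n), 2 ** n // 10):  # corrupt ~10% of positions
    corrupted[pos] ^= 1

# Locally correct position a with 2 queries: code[r] ^ code[r^a] = code[a]
# by linearity, and each vote is wrong only if one of the two uniformly
# distributed queries hit a corruption (prob <= 0.2); take a majority.
a = 0b00001000
votes = [corrupted[r] ^ corrupted[r ^ a]
         for r in (random.randrange(2 ** n) for _ in range(101))]
assert int(sum(votes) > 50) == code[a]
```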

  16. List Decoding [Elias 1957] • Output list of all close codewords. • Can sometimes decode beyond distance/2. • Efficient algorithms for: • Hadamard [Goldreich-Levin] • Reed-Solomon [Sudan, Guruswami-Sudan] • AG codes [Shokrollahi-Wasserman, GS] • Reed-Muller [large q: STV, q=2: GKZ] • Certain concatenated codes [GS, STV, GI, BM] • PV codes [Parvaresh-Vardy] • Folded RS [Guruswami-Rudra] • Multiplicity codes [Kopparty, Guruswami-Wang]

  17. PRGs and Hardcore Bits [Goldreich-Levin 1989, Impagliazzo 1997] • Given a one-way function f: • Easy to compute, hard to invert. • E.g., f(x) = g^x mod p (g generator, p prime). • Goal: a bit b(x) that is hard to compute given f(x). • Thm: Suppose C is (locally) list decodable. Then b(x,i) = C(x)_i is hard given f(x), i. • Proof idea: suppose it's easy. List decode to get few candidates for x. Check if f(candidate) = f(x).

  18. PRG from Hardcore Bits • Given a one-way permutation f with hardcore bit b. • PRG(x,i) = b(x,i), b(f(x),i), b(f^2(x),i), …
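A structural sketch of this iteration. The permutation below is a toy modular map and is NOT one-way (and P, the starting point, and the seed are hypothetical values); only the shape of the construction, with the Goldreich-Levin inner-product bit as b, is illustrated.

```python
P = 8191  # prime; toy domain {1, ..., P-1}

def perm(x):
    # Toy permutation of {1,...,P-1} (multiplication by a unit mod P).
    # NOT one-way -- it only stands in for a one-way permutation f.
    return x * 3 % P

def gl_bit(x, r):
    # Goldreich-Levin hardcore bit: inner product of bit-vectors mod 2.
    return bin(x & r).count("1") % 2

def prg(x, r, out_len):
    # PRG(x, r) = b(x, r), b(f(x), r), b(f^2(x), r), ...
    out = []
    for _ in range(out_len):
        out.append(gl_bit(x, r))
        x = perm(x)
    return out

stream = prg(5, 0b1011011, 16)
assert len(stream) == 16 and set(stream) <= {0, 1}
```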

  19. List Decoding Related to Randomness Extractors

  20. General Weak Random Source [Z '90] • Random variable X on {0,1}^n. • General model: min-entropy H∞(X) = min_x log_2(1/Pr[X=x]) ≥ k. • Flat source: uniform on A ⊆ {0,1}^n, |A| ≥ 2^k.

  21. General Weak Random Source [Z ‘90] • Can arise in different ways: • Physical source of randomness. • Cryptography: condition on adversary’s information, e.g. bounded storage model. • Pseudorandom generators (for space s machines): condition on TM configuration.

  22. Goal: Extract Randomness • Ext: {0,1}^n → {0,1}^m with statistical error ε. • Problem: impossible with a deterministic function, even for k = n−1, m = 1, ε < 1/2.

  23. Impossibility Proof • Suppose f: {0,1}^n → {0,1} satisfies: ∀ sources X with H∞(X) ≥ n−1, f(X) ≈ U. • One of f^(-1)(0), f^(-1)(1) has size ≥ 2^(n-1); take X uniform on it. Then H∞(X) ≥ n−1, but f(X) is constant.
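The counting behind this proof can be checked directly (illustrative n = 10): for any candidate 1-bit extractor f, one output value has at least 2^(n-1) preimages, and the flat source on that preimage set has min-entropy ≥ n−1 while f is constant on it.

```python
import random

n = 10
f = [random.randrange(2) for _ in range(2 ** n)]   # an arbitrary candidate f

pre = [[x for x in range(2 ** n) if f[x] == b] for b in (0, 1)]
bad = max(pre, key=len)   # the larger preimage set
# Flat source X uniform on `bad`: H_inf(X) = log2(|bad|) >= n - 1,
# yet f(X) is constant -- as far from uniform as possible.
assert len(bad) >= 2 ** (n - 1)
```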

  24. Randomness Extractor: Short Seed [Nisan-Z '93, …, Guruswami-Umans-Vadhan '07] • Ext: {0,1}^n × {0,1}^d → {0,1}^m. • Seed Y: d = O(log(n/ε)) random bits; output m = .99k bits; statistical error ε. • Strong extractor: (Ext(X,Y), Y) ≈ uniform.

  25. Graph-Theoretic View: "Expansion" • Bipartite graph: left vertices x ∈ {0,1}^n (N = 2^n), right vertices {0,1}^m (M = 2^m); each x has D = 2^d neighbors Ext(x,y). • Extractor: every set of K = 2^k left vertices has output ≈ uniform; in particular it hits ≥ (1−ε)M right vertices.

  26. Alternate View • View x as the list (Ext(x,1), …, Ext(x,D)), with N = 2^n, M = 2^m, D = 2^d. • For sets S of outputs, BAD_S = {x whose outputs land in S noticeably more often than its density predicts}; extractor ⇒ |BAD_S| < K.

  27. Extractor Codes via Alt-View [Ta-Shma-Z 2001] • List recovery generalizes list decoding: S = (S_1, …, S_D), agreement = |{i | x_i ∈ S_i}|. • |{codewords with agreement ≥ (μ(S) + ε)D}| ≤ |BAD_S|, where μ(S) is the average density of the S_i. • Can construct extractor codes with efficient decoding. • Gives hardcore bits: Ext(x,y) w.r.t. the one-way (f(x), y).

  28. Leftover Hash Lemma ⇒ Johnson Bound • Johnson bound: an [n, k, (½−ε^2)n] code has < L = 1/ε^2 codewords within distance (½−ε)n of a received word r. • Alternative proof [TZ '01]: • Let V = the close codewords, D = distribution of (i, v_i) for uniform i in [n], uniform v in V. • Each v ∈ V agrees with r on > (½+ε)n positions, so Pr[(i,v_i) = (i,r_i)] > ½+ε, giving |D−U| ≥ ε. • If |V| ≥ L: collision-prob(D) < (1/n)(1/|V| + 1 − d/n) = (1+4ε^2)/(2n), which implies |D−U| < ε. Contradiction.
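The Leftover Hash Lemma side of this connection can be checked numerically with the inner-product hash family (illustrative parameters n = 8, k = 5): for a flat source of min-entropy k and m = 1 output bit, the strong-extractor error is at most ½·sqrt(2^m/2^k).

```python
import random

n, k = 8, 5
A = random.sample(range(2 ** n), 2 ** k)  # flat source: uniform on a random 2^k-set

def ip(a, x):
    return bin(a & x).count("1") % 2

# Error of the strong extractor Ext(x, a) = <a, x>: average over seeds a of
# the bias of <a, X>.  This equals the statistical distance of (Ext(X,A), A)
# from (U, A) for a uniform seed A.
err = sum(abs(sum(ip(a, x) for x in A) / len(A) - 0.5)
          for a in range(2 ** n)) / 2 ** n
assert err <= 0.5 * (2 ** 1 / 2 ** k) ** 0.5   # LHL bound for m = 1 output bit
```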

  29. Codes ⇒ Extractors • PRGs + codes ⇒ extractors [Trevisan 1999]. • RM codes ⇒ extractors [Ta-Shma, Z, Safra 2001; Shaltiel, Umans 2001]. • Parvaresh-Vardy codes ⇒ extractors [Guruswami, Umans, Vadhan 2007].

  30. 2-Stage Extractor • Condense: weak source + O(log n) random bits → source of entropy rate .9. • Extract: + O(log n) random bits → ≈ uniform output.

  31. Parvaresh-Vardy Codes ⇒ Condenser [Guruswami-Umans-Vadhan 2007] • F_q a finite field; parameter h ≤ q. • E(Y): a degree-n polynomial irreducible over F_q. • Source: a degree ≤ n−1 univariate polynomial f. • Define f_i(Y) = f^(h^i)(Y) mod E(Y). • C(f, y ∈ F_q) = (y, f_0(y), f_1(y), f_2(y), …, f_{m-1}(y)).
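A minimal sketch of the PV encoding map with tiny hypothetical parameters (F_13, E(Y) = Y³ + 2, h = 2, all chosen for illustration); only the structure of the map, not the condenser analysis, is shown.

```python
P = 13             # field F_P
N = 3              # messages: polynomials f of degree < N
M_OUT = 3          # number of correlated polynomials f_0, ..., f_{M_OUT-1}
E = [2, 0, 0, 1]   # E(Y) = Y^3 + 2, irreducible over F_13 (degree 3, no roots)

def pmul(a, b):
    # Multiply coefficient lists (low-to-high) over F_P.
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % P
    return r

def pmod(a):
    # Reduce mod E(Y), using Y^3 == -2.
    a = a[:]
    for i in range(len(a) - 1, N - 1, -1):
        a[i - N] = (a[i - N] - 2 * a[i]) % P
        a[i] = 0
    return a[:N]

def pv_row(f, y):
    # Codeword symbol at evaluation point y: (y, f_0(y), ..., f_{M_OUT-1}(y)),
    # where f_i = f^(h^i) mod E(Y) with h = 2 (so f_0 = f, f_1 = f^2, f_2 = f^4).
    def ev(g):
        return sum(c * pow(y, i, P) for i, c in enumerate(g)) % P
    row, g = [y], f[:]
    for _ in range(M_OUT):
        row.append(ev(g))
        g = pmod(pmul(g, g))   # g <- g^2 mod E
    return row

# E really has no roots in F_13 (hence is irreducible, being a cubic).
assert all(sum(c * pow(y, i, P) for i, c in enumerate(E)) % P for y in range(P))
assert len(pv_row([3, 1, 4], 5)) == 1 + M_OUT
```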

  32. Independent Sources • Two independent weak sources X, Y of n/2 bits each. • Ext(X,Y): m = Ω(k) bits, statistical error ε.

  33. Bounds for 2 Independent Sources • Classical: H∞(X) > n/2. • Lindsey's Lemma: inner product works. • Bourgain: H∞(X) > .4999n. • Existence: H∞(X) > 2 log n.
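The Lindsey's-lemma bound behind the inner-product extractor can be checked numerically on two flat sources (toy parameters: 8-bit sources of min-entropy 5):

```python
import random

n, k = 8, 5
A = random.sample(range(2 ** n), 2 ** k)   # flat source X
B = random.sample(range(2 ** n), 2 ** k)   # independent flat source Y

def ip(a, b):
    return bin(a & b).count("1") % 2

s = sum((-1) ** ip(x, y) for x in A for y in B)
bias = abs(s) / (len(A) * len(B))
# Lindsey's lemma: |sum| <= sqrt(2^n * |A| * |B|), so the bias of <X, Y>
# is at most sqrt(2^n / (|A| * |B|)) -- nontrivial once k > n/2... here
# k = 5, n = 8 gives bias <= 1/2 for ANY choice of A and B.
assert bias <= (2 ** n / (len(A) * len(B))) ** 0.5
```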

  34. Privacy Amplification With Active Adversary • Setting: two parties share a weak secret X; one picks a seed Y and sends it over a public channel; shared secret = Ext(X,Y). • Problem: an active adversary could change Y to Y'.

  35. Active Adversary • Can arbitrarily insert, delete, modify, and reorder messages. • E.g., can run several rounds with one party before resuming execution with other party.

  36. Non-Malleable Extractor [Dodis-Wichs 2009] • Strong extractor: (Ext(X,Y), Y) ≈ (U, Y). • nmExt is a non-malleable extractor if for every A: {0,1}^d → {0,1}^d with y' = A(y) ≠ y: (nmExt(X,Y), nmExt(X,Y'), Y) ≈ (U, nmExt(X,Y'), Y). • nmExt can't ignore even one bit of the seed. • Existence: k > log log n + c, d = log n + O(1), m = (k − log d)/2.01. • Gives privacy amplification with an active adversary in 2 rounds, with optimal entropy loss.

  37. Explicit Non-Malleable Extractor • Even k = n−1, m = 1 is nontrivial. • E.g., Ext(x,y) = x·y fails: X = 0??...?, y' = A(y) flips the first bit, so x·y' = x·y. • Dodis-Li-Wooley-Z 2011: H∞(X) > n/2. • Cohen-Raz-Segev 2012: seed length O(log n). • Li 2012: H∞(X) > .499n. • Connection with 2-source extractors.

  38. A Simple 1-Bit Construction [Li] • Sidon set: a set S with all sums s+t, s,t in S, distinct. • Thm [Li]: f(x,y) = x·y, with y uniform from a Sidon set of size 2^(n/2), is a non-malleable extractor for H∞(X) > n/2. • Proof: H∞(Y) = n/2, so X·Y ≈ U (Lindsey's lemma). • Suffices to show X·Y + X·A(Y) ≈ U (XOR lemma). • X·Y + X·A(Y) = X·(Y + A(Y)). • H∞(Y + A(Y)) ≥ H∞(Y) − 1 = n/2 − 1.
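The last step of the proof can be checked on a toy Sidon set (the graph of cubing in F_8, packed into 6-bit strings; parameters chosen for illustration): for any adversary A on S with A(y) ≠ y, the map y ↦ y ⊕ A(y) is at most 2-to-1, so the min-entropy drops by at most 1.

```python
import random
from collections import Counter

M = 3
MOD = 0b1011  # x^3 + x + 1, irreducible over F_2

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= MOD
    return r

# Sidon set S = {(x, x^3) : x in F_8}, packed into 6-bit strings.
S = [(x << M) | gf_mul(x, gf_mul(x, x)) for x in range(1 << M)]

for _ in range(100):
    # Arbitrary adversary A: S -> S with A(y) != y.
    A = {y: random.choice([z for z in S if z != y]) for y in S}
    counts = Counter(y ^ A[y] for y in S)
    # If y1 ^ A(y1) = y2 ^ A(y2), the Sidon property forces the unordered
    # pairs {y1, A(y1)} and {y2, A(y2)} to coincide: at most 2 preimages.
    assert max(counts.values()) <= 2
```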

  39. Non-Malleable Codes [Dziembowski, Pietrzak, Wichs 2010] • Adversary tampers with Enc(m) via some f in F. • Ideally Dec(f(Enc(m))) = m or "error". • Impossible if f(x) = Enc(m') is allowed. • Relaxation: Dec(f(Enc(m))) = m or is independent of m. • Randomized encoding allowed. • Probabilistic method: exist if |F| < 2^(2^(αn)), α < 1. • Explicit? • Codes for f(x_1,…,x_n) = f_1(x_1),…,f_n(x_n).

  40. Split-State Tampering • f(x,y) = g(x), h(y), with |x| = |y| = n/2. • 2-source extractors for H∞(X) + H∞(Y) > 2n/3 ⇒ codes for 1-bit messages [Dziembowski, Kazana, Obremski 2013]. • Polynomial rate: n = k^(7+o(1)) via additive combinatorics [Aggarwal, Dodis, Lovett 2013]. • Constant rate, if one can construct non-malleable 2-source extractors for entropy rate .99 [Cheraghchi, Guruswami 2013].

  41. Non-Malleable 2-Source Extractor [Cheraghchi, Guruswami 2013] • X and Y independent weak sources; think of H∞(X) = H∞(Y) = .99(n/2). • For all A_1, A_2 with x' = A_1(x) ≠ x, y' = A_2(y) ≠ y: • (nmExt(X,Y), nmExt(X,Y')) ≈ (U, nmExt(X,Y')) • (nmExt(X,Y), nmExt(X',Y)) ≈ (U, nmExt(X',Y)) • (nmExt(X,Y), nmExt(X',Y')) ≈ (U, nmExt(X',Y')) • Open question: explicit construction.

  42. Key Properties of Codes • Dual distance ⇒ k-wise independence, Sidon sets. • Relative distance ≈ ½ ⇒ small-bias spaces. • Local decodability ⇒ amplifying hardness of functions for PRGs, extractors. • List decodability ⇒ cryptographic PRGs, extractors. • Non-malleability ⇒ non-malleable 2-source extractors.

  43. Open Questions • Construct ε-biased spaces of size O(k/ε^2), i.e., explicit [n = O(k/ε^2), k, (½−ε)n] codes. • 2-source extractors for entropy rate α, any α > 0. • Non-malleable extractors for H∞(X) = αn. • Non-malleable codes of constant rate. • Non-malleable 2-source extractors. • Other applications & connections.

  44. Thank you!
