Functions That Are Hard on Average

Presentation Transcript


  1. Lower Bounds on the Query Complexity of Non-Uniform and Adaptive Reductions Showing Hardness Amplification

  2. Functions That Are Hard on Average
  A function g : {0,1}^n → {0,1} is p-hard for a family of circuits if for every circuit C in the family, Pr_{x←U_n}[C(x) = g(x)] < p.
  [Diagram: a Boolean circuit C attempting to compute g]
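As a concrete reading of the definition, here is a small brute-force check in Python. It is illustrative only: the function, the "circuits", and the parameter choices are arbitrary stand-ins, and enumerating {0,1}^n is of course only feasible for tiny n.

```python
from itertools import product

def agreement(C, g, n):
    """Fraction of inputs x in {0,1}^n on which C(x) == g(x) (Pr over uniform x)."""
    inputs = list(product([0, 1], repeat=n))
    return sum(C(x) == g(x) for x in inputs) / len(inputs)

def is_p_hard(g, circuit_family, n, p):
    """g is p-hard for the family if *every* circuit C in it satisfies agreement < p."""
    return all(agreement(C, g, n) < p for C in circuit_family)

# Toy example: 4-bit parity against the two constant "circuits".
n = 4
g = lambda x: sum(x) % 2
family = [lambda x: 0, lambda x: 1]    # each agrees with parity on exactly half of the inputs
print(is_p_hard(g, family, n, p=0.6))  # True: both constants have agreement 0.5 < 0.6
```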

  3. Hardness Variations
  For simplicity assume δ = 1/10.
  • Hard on worst case: p = 1 (circuits fail to compute some inputs).
  • Mildly average-case hard: p = 1-δ (circuits fail to compute a noticeable fraction of inputs).
  • Strongly average-case hard: p = ½+ε (almost random guessing).

  4. Applications of Functions That Are Hard on Average
  • Derandomization, pseudorandomness [Yao82, BM84, NW94, …]
  • Cryptographic primitives [Yao82, BM84, …]
  These applications require functions that are very hard on average: p = ½ + negligible.

  5. Hardness Amplification
  [Diagram: a worst-case hard or mildly average-case hard f is transformed into a strongly average-case hard g = Amp(f)]
  Assumption: f is worst-case / mildly average-case hard for circuits of size at most s.
  Conclusion: g = Amp(f) is strongly average-case hard for circuits of size at most s'.
  Example: Yao's XOR lemma (δ = 1/10). If a function f(x) is (1 - 1/10)-hard for circuits of size at most s, then the function g(x_1, …, x_k) = f(x_1) ⊕ ⋯ ⊕ f(x_k) is (½+ε)-hard for circuits of size at most s' = s·poly(ε) < s for large enough k, e.g. k = poly(log(1/ε)).
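A minimal sketch of the XOR construction itself, i.e. just the mapping g = Amp(f); the hardness claim is the content of the lemma, not of the code. All names here are my own, not from the slides.

```python
from functools import reduce
from operator import xor

def xor_amplify(f, k):
    """Yao-style amplification: g(x_1, ..., x_k) = f(x_1) XOR ... XOR f(x_k)."""
    def g(*xs):
        assert len(xs) == k
        return reduce(xor, (f(x) for x in xs))
    return g

# Toy inner function on integers standing in for n-bit strings.
f = lambda x: x % 2
g = xor_amplify(f, k=3)
print(g(5, 2, 7))   # f(5) ^ f(2) ^ f(7) = 1 ^ 0 ^ 1 = 0
```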

  6. Hardness Amplification
  [Diagram: a worst-case hard or mildly average-case hard f is transformed into a strongly average-case hard g = Amp(f)]
  Assumption: f is worst-case / mildly average-case hard for circuits of size at most s.
  Conclusion: g = Amp(f) is strongly average-case hard for circuits of size at most s'.
  Example: Direct product / concatenation lemma (δ = 1/10). If a function f(x) is (1 - 1/10)-hard for circuits of size at most s, then the function g(x_1, …, x_k) = f(x_1) ∘ ⋯ ∘ f(x_k) is ε-hard for circuits of size at most s' = s·poly(ε) < s for large enough k.
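The direct-product construction differs from the XOR construction only in that the k answers are concatenated rather than XORed; again only the construction is sketched, under the same caveats.

```python
def direct_product(f, k):
    """Direct product / concatenation: g(x_1, ..., x_k) = (f(x_1), ..., f(x_k)),
    i.e. a k-bit, non-Boolean output."""
    def g(*xs):
        assert len(xs) == k
        return tuple(f(x) for x in xs)
    return g

f = lambda x: x % 2
print(direct_product(f, k=3)(5, 2, 7))   # (1, 0, 1)
```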

  7. Hardness Amplification
  [Diagram: a worst-case hard or mildly average-case hard f is transformed into a strongly average-case hard g = Amp(f)]
  Assumption: f is worst-case / mildly average-case hard for circuits of size at most s.
  Conclusion: g = Amp(f) is strongly average-case hard for circuits of size at most s'.
  In all hardness amplification results in the literature, the target function g = Amp(f) is hard for circuits of size s' < s (in fact, s' ≤ ε·s). This implies ε ≥ 1/s, which is problematic in some applications.
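To spell out the one-line step behind "implies ε ≥ 1/s" (using only that a meaningful circuit has size at least 1):

$$ 1 \;\le\; s' \;\le\; \varepsilon \cdot s \quad\Longrightarrow\quad \varepsilon \;\ge\; \frac{1}{s}. $$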

  8. Size Loss
  Natural question: is this size loss necessary?
  [Diagram: circuits of size at most s' vs. circuits of size at most s]
  We will show that size loss is necessary for certain proof techniques.

  9. Proof by Reduction
  f is (1-δ)-hard for size s iff there is no C of size s such that Pr[C(x) = f(x)] ≥ 1-δ.
  g is (½+ε)-hard for size s' iff there is no D of size s' such that Pr[D(y) = g(y)] ≥ ½+ε.
  Proof by reduction: arguing in the contrapositive, one shows that if there is a D of size s' with Pr[D(y) = g(y)] ≥ ½+ε, then there is a C of size s with Pr[C(x) = f(x)] ≥ 1-δ. The existence of the circuit C is shown by providing a reduction R (an oracle procedure) such that C = R^D.
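A schematic of the "C = R^D" step in Python: the reduction is handed the oracle as a callable and the result is a stand-alone procedure. All names here are illustrative assumptions, not from the paper.

```python
from typing import Callable

Oracle = Callable[[int], int]   # D: maps a g-input y (encoded as an int) to a bit

def compose(R: Callable[[int, Oracle], int], D: Oracle) -> Callable[[int], int]:
    """Hard-wire the oracle D into the reduction R, yielding C = R^D.
    In circuit terms: replace every oracle gate of R by a copy of D."""
    def C(x: int) -> int:
        return R(x, D)
    return C

# Toy usage: a "reduction" that queries D on x and x+1 and XORs the answers.
R = lambda x, D: D(x) ^ D(x + 1)
D = lambda y: y % 2
C = compose(R, D)
print(C(4))   # D(4) ^ D(5) = 0 ^ 1 = 1
```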

  10. Various Notions of Reductions
  • "Uniform": R(·) is an "efficient" oracle TM.
  • "Semi-uniform": R(·) is a "small" oracle circuit.
  • "Non-uniform": R(·) is a "small" oracle circuit that is additionally allowed to receive a "short" advice string α, chosen as a function of f and, more importantly, of the oracle D supplied to R.
  Known: reductions of the first two kinds (uniform and semi-uniform) cannot prove most hardness amplification results in the literature [STV99].
  More precisely, a non-uniform reduction R(·) satisfies: for every D such that Pr[D(y) = g(y)] ≥ ½+ε, there exists α = α(f, D) such that Pr[R^D(x, α) = f(x)] ≥ 1-δ.
  Essentially all known hardness amplification results are proven using such reductions.

  11. Number of Queries ⇒ Size Loss
  If the reduction R makes ≤ q queries to the oracle D, then the circuit C can be constructed by replacing every oracle gate with the circuit D:
  s = size(C) ≈ q·size(D) + size(R) ≥ q·size(D) = q·s'.
  In this work we show that every reduction must make q = Ω(1/ε) queries, hence s' ≤ ε·s: size loss!
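Written out, the size accounting on this slide together with the consequence of the query lower bound (taking size(D) = s'):

$$ s \;=\; \mathrm{size}(C) \;\approx\; q\cdot\mathrm{size}(D) + \mathrm{size}(R) \;\ge\; q\cdot s' \quad\Longrightarrow\quad s' \;\le\; \frac{s}{q} \;=\; O(\varepsilon\cdot s) \ \text{ when } q = \Omega(1/\varepsilon). $$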

  12. Our Results (Informally)
  Theorem*: Every reduction R(·) must make q = Ω(1/ε) queries to the oracle, even if R(·) is non-uniform and adaptive (i.e., it makes adaptive queries).
  *For standard parameters of hardness amplification.
  Comparison to [SV10]:
  • [SV10] only handles non-uniform non-adaptive reductions.
  • Our results apply to a more general class of hardness amplification tasks (non-Boolean g, errorless amplification, "function-specific amplification").
  • [SV10] gives a better bound of q = Ω(log(1/δ)/ε²) for the Boolean case. (Our results apply to a more general setup in which there are upper bounds of q = O(log(1/δ)/ε).)

  13. Something About the Proof
  Given functions f, g, consider a (distribution over) oracles D:
  • With probability 2ε, D(y) = g(y).
  • With probability 1-2ε, D(y) answers a fresh random bit.
  ⇒ Pr[D(y) = g(y)] ≥ ½+ε (so that R^D has to approximately compute f).
  Folklore, e.g. [R]: a reduction R(·) that makes o(1/ε) queries is unlikely to get any meaningful information.
  • R^D cannot compute f (even approximately).
  • Contradiction (meaning that the number of queries is Ω(1/ε)).
  Difficulties for general reductions:
  • Non-uniform reductions can use the advice string to locate queries y on which D answers correctly.
  • Furthermore, adaptivity may allow a non-uniform reduction to find "interesting" queries y (based on whether or not previous queries were answered correctly).
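The "⇒ Pr[D(y) = g(y)] ≥ ½+ε" step is a direct calculation: the fresh random bit agrees with g(y) with probability ½, so

$$ \Pr[D(y)=g(y)] \;=\; 2\varepsilon\cdot 1 \;+\; (1-2\varepsilon)\cdot\tfrac{1}{2} \;=\; \tfrac{1}{2}+\varepsilon. $$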

  14. Something About the Proof
  Difficulties for general reductions:
  • Non-uniform reductions can use the advice string to locate queries y on which D answers correctly.
  • Furthermore, adaptivity may allow a non-uniform reduction to find "interesting" queries y (based on whether or not previous queries were answered).
  Our approach:
  • Following [SV10], we show that the advice string does not help a non-adaptive reduction to find queries that are answered (except for a few queries, which we can handle).
  • For adaptive reductions, consider "hybrid executions" of R^D (a toy version is sketched below):
    • The first t queries are not answered.
    • The remaining q-t queries are answered according to the oracle distribution.
  • Hybrid executions are in some sense non-adaptive (the (t+1)-st query is known in advance).
  • We first bound the information that R gets about g in hybrid executions.
  • Then we show that with high probability the real and hybrid executions coincide.
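A minimal Python sketch of the two objects above: the sampled oracle from slide 13 and a "hybrid execution" oracle whose first t queries are not answered. Everything here is my own illustration; the names, the encoding of inputs as integers, and the use of None for "not answered" are assumptions, not the paper's conventions.

```python
import random

def sample_oracle(g, epsilon, n):
    """Sample D as on slide 13: on each y in {0,1}^n (as an int), answer g(y) with
    probability 2*epsilon, otherwise a fresh random bit; answers are fixed per sampled D."""
    table = {}
    for y in range(2 ** n):
        table[y] = g(y) if random.random() < 2 * epsilon else random.randint(0, 1)
    return lambda y: table[y]

def hybrid_oracle(D, t):
    """Hybrid execution as on slide 14: the first t queries are not answered
    (modelled here as None), the remaining queries are answered according to D."""
    count = 0
    def answer(y):
        nonlocal count
        count += 1
        return None if count <= t else D(y)
    return answer

# Toy usage with g = 3-bit parity.
g = lambda y: bin(y).count("1") % 2
D = sample_oracle(g, epsilon=0.1, n=3)
H = hybrid_oracle(D, t=2)
print([H(y) for y in range(5)])   # first two entries are None, the rest come from D
```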

  15. Conclusion and Open Problems
  • Size loss is inherent in reductions showing hardness amplification, even in the most general case (non-uniform and adaptive reductions).
  • This is not an impossibility result for hardness amplification: it only rules out certain proof techniques.
  • The limitations apply to essentially all proof techniques in the literature; see the discussion in the paper.
  • Our lower bounds on the number of queries match upper bounds in some (but not all) settings:
    • Direct product lemma with constant δ [KS03].
    • Errorless amplification with constant δ [BS07, W11].
  Open:
  • Improve the lower bounds to match the upper bounds:
    • For non-constant δ.
    • For a Boolean target function.
  • Can we develop other proof techniques for hardness amplification? (See e.g. [GST05, A06, GT07].)

  16. Thank You…
