




Presentation Transcript


  1. Unprovable Security (how to use meta reductions) Rafael Pass, Cornell University

  2. Modern Cryptography • Precisely define the security goal (e.g., commitment scheme) • Precisely stipulate a computational intractability assumption (e.g., hardness of factoring) • Security Reduction: prove that any attacker A that breaks security of scheme π can be used to violate the intractability assumption.

  3. An Example: Commitments from OWPs [GL,Blum] • Task: Commitment Scheme • Binding + Hiding • Non-interactive • Intractability Assumption: existence of an OWP f • f is easy to compute but hard to invert • Security reduction [Goldreich-Levin]: there exist Com_f and a PPT R s.t. for every algorithm A that breaks hiding of Com_f, R^A inverts f • Reduction R only uses the attacker A as a black box • In this case the construction is also black-box
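The [GL,Blum]-style commitment on this slide can be sketched concretely: commit to a bit b as (f(r), h, ⟨r,h⟩ ⊕ b), where ⟨r,h⟩ is the Goldreich-Levin hardcore predicate. This is a minimal sketch under a toy assumption: the f below is an affine permutation mod 2^n, which is trivially invertible and therefore NOT one-way; it only stands in for a real OWP so the binding/hiding mechanics are visible.

```python
import secrets

N_BITS = 64

def f(x: int) -> int:
    # Toy permutation on N_BITS-bit integers (odd multiplier => invertible mod 2^n).
    # NOT one-way -- a stand-in for a real OWP, for illustration only.
    return (x * 0x9E3779B97F4A7C15 + 1) % (1 << N_BITS)

def gl_bit(r: int, h: int) -> int:
    # Goldreich-Levin hardcore predicate: inner product of bit vectors mod 2.
    return bin(r & h).count("1") % 2

def commit(b: int):
    r = secrets.randbits(N_BITS)   # randomness, kept as the opening
    h = secrets.randbits(N_BITS)   # GL inner-product vector
    return (f(r), h, gl_bit(r, h) ^ b), r

def verify(com, b: int, r: int) -> bool:
    y, h, c = com
    return f(r) == y and (gl_bit(r, h) ^ b) == c
```

Binding holds because f is a permutation, so (y, h) determines r and hence the committed bit; hiding (for a real OWP) follows from gl_bit being hardcore for f.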

  4. Turing Reductions [Diagram: challenger C sends f(r); R^A replies r] Security reduction: R^A breaks C whenever A breaks the commitment. Reduction R may rewind and restart A. Our focus: R is PPT (later, we mention nuPPT)

  5. Provable Security • In the last three decades, lots of amazing tasks have been securely realized under well-studied intractability assumptions • Key Exchange, Public-Key Encryption, Secure Computation, Zero-Knowledge, PIR, Secure Voting, Identity based encryption, Fully homomorphic Encryption, Leakage-resilient Encryption… • But: several tasks/schemes have resisted security reductions under well-studied intractability assumptions.

  6. Unprovable Security We have candidate constructions of primitives that we cannot reduce to any "natural" intractability assumption. • Schnorr's identification protocol • Commitment secure against selective opening • One-more inversion problems • Blind signatures • Witness hiding of GMW/Blum HC • SNARGs • Statistical NIZK • Non-interactive non-malleable commitments • Indistinguishability obfuscation (Slide labels: TYPE 1: [P11]; TYPE 2: [GW11],[P13]; for the rest, we don't know why.)

  7. Unprovability Results • Tasks cannot be proven secure using (arbitrary) black-box security reductions based on any "standard" intractability assumption • Any such security reduction R would itself constitute an attack on the intractability assumption • This also rules out non-black-box constructions: only the security reduction needs to be black-box

  8. Common Proof Approach: Meta-Reduction [BV'99, (Bra'79)] [Diagram: challenger C sends N=pq; R^A replies p,q] 1. Design a particular attacker A that breaks the scheme. 2. Show how to "indistinguishably" emulate the attacker in poly-time. Note: most "meta-reductions" (e.g., [BV'99,AF'07,HH'09,HRS'09,FS'10]) only work for "restricted" types of R (e.g., algebraic). Today: unrestricted reductions, R is any PPT [P'06,PTV'10,P'11,GW'11,P'13,CMLP'13].

  9. Intractability Assumptions • Following [Naor'03], we model an intractability assumption as an interaction between a challenger C and an attacker A. • The goal of A is to make C accept • C may be computationally unbounded (different from [Naor'03]) [Diagram: C sends f(r); A replies r] Intractability assumption (C,t): "no PPT A can make C output 1 w.p. significantly above t". A breaks (C,t) if A makes C(1^n) output 1 w.p. t + 1/poly(n)
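A 2-round instance of this challenger model, the assumption "f is one-way" with threshold t = 0, might be sketched as follows. Everything here is illustrative: the group is tiny, so the brute-force adversary that the assumption forbids at real parameter sizes actually succeeds.

```python
import secrets

P = 101   # toy prime; real use needs cryptographic sizes
G = 2     # generator of Z_101^* (multiplicative order 100)

def f(x: int) -> int:
    # Toy "OWF": discrete exponentiation in a tiny group.
    return pow(G, x, P)

def challenger(adversary) -> int:
    """2-round challenger C for 'f is one-way' (threshold t = 0):
    C sends f(r) and outputs 1 iff the adversary returns a preimage."""
    r = secrets.randbelow(P - 1)
    r_prime = adversary(f(r))
    return 1 if f(r_prime) == f(r) else 0

def brute_force_invert(y: int) -> int:
    # Feasible only because the group is tiny -- exactly what (C,t) rules out.
    return next(x for x in range(P - 1) if f(x) == y)
```

An adversary making C(1^n) output 1 with probability noticeably above t = 0 breaks the assumption; here `challenger(brute_force_invert)` always returns 1.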

  10. Intractability Assumptions • Following [Naor'03], we model an intractability assumption as an interaction between a challenger C and an attacker A. [Diagram: C sends f(r); A replies r] • 2-round: f is an OWF, Factoring, G is a PRG, DDH, … • O(1)-round: Enc is semantically secure (FHE), (P,V) is WH • O(1)-round with unbounded C: (P,V) is sound • Unbounded round: F is a PRF, Sig is GMR-secure, Enc is CCA-secure, (P,V) is sequentially witness hiding, one-more discrete log, …

  11. Two Classes of Intractability Assumptions • Bounded-round assumptions • TYPE I primitives are not themselves bounded-round assumptions • We separate them from such assumptions • Efficient-challenger (falsifiable) assumptions • TYPE II primitives are not themselves efficient-challenger assumptions • We separate them from such assumptions

  12. Part I – Separations from Bounded-Round Assumptions: Schnorr, Selective Opening, One-More Inversion, Blind Signatures, Witness Hiding

  13. Schnorr's Identification Scheme [Sch'89] • One of the most famous and widely employed identification schemes (e.g., the Blackberry router protocol) • Secure under a passive "eavesdropper" attack based on the discrete logarithm assumption • What about active attacks? • [BP'02] proved it secure under a new type of "one-more" inversion assumption • Can we base security on more standard assumptions?
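For concreteness, an honest run of Schnorr's protocol can be sketched in a toy subgroup (the parameters below are illustrative and far too small for security): the prover knows w with h = g^w and convinces the verifier without revealing w.

```python
import secrets

P, Q, G = 23, 11, 2   # toy: G generates the order-Q subgroup of Z_23^*

w = 7                  # prover's secret key
h = pow(G, w, P)       # public key

def schnorr_round() -> bool:
    k = secrets.randbelow(Q)     # prover: fresh randomness
    a = pow(G, k, P)             # prover -> verifier: commitment
    b = secrets.randbelow(Q)     # verifier -> prover: challenge
    c = (k + b * w) % Q          # prover -> verifier: response
    # verifier checks g^c == a * h^b
    return pow(G, c, P) == (a * pow(h, b, P)) % P
```

Completeness is immediate: g^c = g^(k+bw) = a·h^b, so every honest round verifies.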

  14. (Non-interactive) Commitment Schemes under Selective Opening [DNRS'99] • A commits to n values v1, …, vn • B adaptively asks A to open, say, half of them • Security: unopened commitments remain "hidden" • The problem originated in the distributed computing literature over 25 years ago • Can we base selective opening security of non-interactive commitments on any standard assumption?
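The flow of the selective-opening game might be sketched as below, using a hash-based commitment as a stand-in (its hiding is only heuristic, and the scheme is my illustration, not the slide's): A commits to n values, B picks half of the indices, and A opens exactly those.

```python
import hashlib
import secrets

def commit(v: bytes):
    # Hash-based commitment (hiding only heuristic) -- illustration only.
    r = secrets.token_bytes(16)
    return hashlib.sha256(r + v).digest(), r   # (commitment, opening)

def selective_opening_round(values) -> bool:
    coms, openings = zip(*(commit(v) for v in values))
    n = len(values)
    # B adaptively picks half of the indices to be opened
    chosen = secrets.SystemRandom().sample(range(n), n // 2)
    # A opens exactly those commitments; B verifies each opening
    return all(
        hashlib.sha256(openings[i] + values[i]).digest() == coms[i]
        for i in chosen
    )
```

The security question on the slide is whether the values at the *unopened* indices remain hidden even though half of the randomness has been revealed.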

  15. One-More Inversion Assumptions [BNPS'02] • You get n target points y1, …, yn in a group G with generator g. • Can you find the discrete logarithms of all n of them if you may make n-1 queries to a discrete logarithm oracle (for G and g)? • The one-more DLOG assumption states that no PPT algorithm can succeed with non-negligible probability • [BNPS] and follow-up work: very useful for proving security of practical schemes • Can the one-more DLOG assumption be based on more standard assumptions? • What if we weaken the assumption and only give the attacker n^eps queries?
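The one-more DLOG game can be made concrete in a toy group: the challenger hands out n targets plus a DLOG oracle that may be used only n-1 times, and the adversary must output all n discrete logs. In this tiny group brute force wins trivially (without even touching the oracle), which is precisely what the assumption forbids at real sizes.

```python
import secrets

P, G = 101, 2   # toy group Z_101^* with generator 2 (order 100); illustrative only

def dlog(y: int) -> int:
    # Brute-force discrete log -- feasible only in this toy group.
    return next(x for x in range(P - 1) if pow(G, x, P) == y)

def one_more_dlog_game(n: int, adversary) -> bool:
    xs = [secrets.randbelow(P - 1) for _ in range(n)]
    targets = [pow(G, x, P) for x in xs]
    budget = n - 1                       # the adversary gets only n-1 oracle calls

    def oracle(y: int) -> int:
        nonlocal budget
        if budget == 0:
            raise RuntimeError("oracle budget exhausted")
        budget -= 1
        return dlog(y)

    return adversary(targets, oracle) == xs
```

The assumption says no PPT adversary wins this game with non-negligible probability when the group is of cryptographic size.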

  16. Unique Non-interactive Blind Signatures [Chaum'82] • A signature scheme where a user U may ask the signer S to sign a message m, while keeping m hidden from S. • Furthermore, there exists only a single valid signature per message • Chaum provided a first implementation in 1982; very useful in, e.g., e-cash • [BNPS] give a proof of security in the Random Oracle Model based on a one-more RSA assumption. • Can we base security of Chaum's scheme, or any other unique blind signature scheme, on any standard assumption?

  17. Sequential Witness Hiding of O(1)-round public-coin protocols • Take any of the classic O(1)-round public-coin ZK protocols (e.g., GMR,GMW, Blum) • Repeat them in parallel to get negligible soundness error. • Do they suddenly leak the witness to the statement proved? [Feige’90] • Sequential WH: No verifier can recover the witness after sequentially participating in polynomially many proofs. • Can sequential WH of those protocols be based on any standard assumption?

  18. Theorem: Let (C,t) be a k(.)-round intractability assumption where k is a polynomial. If there exists a PPT reduction R for basing security (of any of the previously mentioned schemes) on the hardness of (C,t), then there exists a PPT attacker B that breaks (C,t). Note: the restriction to bounded-round C is necessary; otherwise we would include the assumption that the schemes themselves are secure!

  19. Related Work • Several earlier lower bounds: • One-more inversion assumptions [BMV'08] • Selective opening [BHY'09] • Witness hiding [P'06,HRS'09,PTV'10] • Blind signatures [FS'10] • But they only consider restricted types of reductions (a la [FF'93,BT'04]), or (restricted types of) black-box constructions (a la [IR'88]) • The only exceptions, [P'06,PTV'10], provide conditional lower bounds on constructions of certain types of WH proofs based on OWFs • Our result applies to ANY Turing security reduction and also to non-black-box constructions.

  20. Proof Outline • Sequential Witness Hiding is “complete” • A positive answer to any of the questions implies the existence of a “special” O(1)-round sequential WH proof/argument for a language with unique witnesses. • Sequential WH of “special” O(1)-round proofs/arguments for languages with unique witnesses cannot be based on poly-round intractability assumptions using a Turing reduction.

  21. Special-Sound Proofs [CDS,Bl] [Diagram: the prover sends a; the verifier replies with a random challenge b0 (resp. b1), b0, b1 ←R {0,1}^n (the "slot"); the prover answers c0 (resp. c1). From two accepting transcripts (a,b0,c0), (a,b1,c1) with b0 ≠ b1 one can extract a witness w.] • Relaxations ("generalized special-sound"): • multiple rounds • computationally sound protocols (a.k.a. arguments) • need p(n) transcripts (instead of just 2) to extract w
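Special-soundness extraction can be shown concretely with Schnorr's sigma protocol as the running example (my choice of instantiation; toy parameters): two accepting transcripts that share the first message a but have distinct challenges yield the witness as w = (c0 - c1)·(b0 - b1)^(-1) mod q.

```python
P, Q, G = 23, 11, 2     # toy order-Q subgroup of Z_23^*; illustrative only
w = 7                   # the (unique) witness: h = G^w
h = pow(G, w, P)

def transcript(k: int, b: int):
    """Accepting transcript with first message a = G^k and challenge b."""
    a = pow(G, k, P)
    c = (k + b * w) % Q
    assert pow(G, c, P) == (a * pow(h, b, P)) % P   # verifier accepts
    return a, b, c

def extract(t0, t1) -> int:
    """Special-soundness extractor: two transcripts, same slot, different challenges."""
    (a0, b0, c0), (a1, b1, c1) = t0, t1
    assert a0 == a1 and b0 != b1
    return (c0 - c1) * pow(b0 - b1, -1, Q) % Q
```

This is exactly why the meta-reduction rewinds a "slot": replaying the same first message with a fresh challenge produces the second transcript the extractor needs.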

  22. Main Lemma • Let (C,t) be a k(.)-round intractability assumption where k is a polynomial. Let (P,V) be an O(1)-round generalized special-sound proof of a language L with unique witnesses. • If there exists a PPT reduction R for basing sequential WH of (P,V) on the hardness of (C,t), then there exists a PPT attacker B that breaks (C,t)

  23. Main Lemma • Let (C,t) be a k(.)-round intractability assumption where k is a polynomial. Let (P,V) be an O(1)-round generalized special-sound proof of a language L with unique witnesses. • Let q(n) = ω(n+2k(n)+1). Assume there exists a PPT reduction R and a polynomial p s.t., for every attacker A that completely recovers the (unique) witness to every statement x after receiving q(|x|) sequential proofs of x, R^A breaks (C,t) with probability 1/p(n) for infinitely many n. • Then there exist some polynomial p' and a PPT B such that B breaks (C,t) with probability 1/p'(n) for infinitely many n.

  24. Proof Idea [Diagram: challenger C sends f(r); R^A replies r] Assume R^A breaks C whenever A completely recovers the witness of any statement x of which it hears sufficiently many sequential proofs. Goal: emulate a successful A for R in PPT, and thus break C in PPT (the idea goes back to the [BV'99] "meta-reduction", and even earlier to [Bra'79])

  25. Proof Idea [Diagram: challenger C sends f(r); R^A replies r] Assume R^A breaks C whenever A completely recovers the witness of any statement x of which it hears sufficiently many sequential proofs. Two steps: 1. Design a particular attacker A that breaks sequential WH. 2. Show how to perfectly emulate A for R.

  26. Proof Idea [Diagram: R proves x to its oracle, which returns w; C sends f(r), R replies r] Assume reduction R is "nice" [BMV'08,HRS'09,FS'10]: it only asks a single query to its oracle (or asks queries sequentially). Step 1: Let A act as an honest receiver of a single proof of x, and then simply output a witness w for x (found using brute force). Step 2: To emulate A for R, simply "rewind" R, feeding it a new "challenge", and extract the witness. The unique-witness requirement is crucial to ensure we actually emulate A (c.f. [Bra'79], [AGGM'05])

  27. General Reductions: Problem I [Diagram: nested oracle calls for x1, x2, x3 returning w1, w2, w3; rewinding here redoes the work of the nested sessions] • Problem: R might nest its oracle calls. • "Naive extraction" requires exponential time (c.f. concurrent ZK [DNS'99]) • Solution: If we require R to provide many sequential proofs, then we can (recursively) find one "slot" where the nesting depth is "small": • require A to only output a witness after hearing many proofs • use techniques reminiscent of concurrent ZK a la [RK'99], [CPS'10]

  28. General Reductions: Problem II Problem: R might not only nest its oracle calls, but may also rewind its oracle. 1. R might notice that it is being rewound. 2. Special-soundness might no longer hold under such rewindings. Solution: Let A generate its messages by applying a random function to the transcript of messages. Uses techniques reminiscent of the black-box ZK lower bound of [GK'90],[P'06]. The O(1)-round restriction on (P,V) is crucial here.

  29. General Reductions: Problem III [Diagram: R's oracle call for x (returning w) intertwined with its interaction with C] Problem: oracle calls may be intertwined with the interaction with C. Solution: If we require R to provide many sequential proofs, then at least one proof ("slot") is guaranteed not to intertwine.

  30. Specifying the Attacker A • Upon receiving a statement x from the reduction: • Wait until receiving q(n) = ω(n + 2k(n)+1) sequential proofs of x • Generate the messages in each proof by applying a random oracle to the transcript • After hearing q(n) proofs of x, recover a witness w using brute force and return it
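The attacker A above can be sketched as a stateful object. This is a heavy simplification of the slide: the "random function" is replaced by a seeded hash (a standard stand-in), and the brute-force witness recovery is left as a caller-supplied stub.

```python
import hashlib
import secrets

class Attacker:
    """Sketch of A: demand q sequential proofs of x, derive each verifier
    message by hashing the transcript (random-function stand-in), and only
    release the witness after all q proofs. Toy illustration only."""

    def __init__(self, q: int):
        self.q = q                         # q(n) = number of proofs demanded
        self.seed = secrets.token_bytes(16)  # key of the "random function"
        self.proofs_heard = 0

    def verifier_message(self, transcript: bytes) -> bytes:
        # Deterministic in the transcript: a rewinding R sees consistent answers.
        return hashlib.sha256(self.seed + transcript).digest()

    def proof_completed(self) -> None:
        self.proofs_heard += 1

    def witness(self, brute_force_extract):
        # Release the witness only once q proofs have been heard.
        if self.proofs_heard < self.q:
            return None
        return brute_force_extract()
```

Deriving messages deterministically from the transcript is what defeats Problem II: when R rewinds A on the same prefix, A answers identically, just like a fixed random function would.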

  31. Specifying the Meta-Reduction B • B externally interacts with C and internally with R. High-level idea: honestly emulate A for R, but attempt to extract a witness w for every statement x proved by R by appropriately rewinding R at some "slot"

  32. [Diagram: R proves x3 (returning w3) inside a proof of x, while interacting with C] Rewind a slot if: • no external message to C occurs inside it • the number of other slots inside it is "small"

  33. Specifying the Meta-Reduction B • B externally interacts with C and internally with R • Honestly emulate A for R until reaching the "closing" of a slot such that: • no messages with C are exchanged inside the slot • the number of slots inside it is "small" • Rewind the slot (using new challenges) until a witness has been extracted; cut off each rewinding if R tries to send a message to C, or if the number of slots inside gets "big" • The expected number of rewindings is <= 1

  34. Specifying the Meta-Reduction B • Recursive procedure: we may perform rewindings inside the rewindings. • "Small" at depth d: M/n^d, where M = runtime of R • => the recursion depth is bounded by a constant c • => the expected running time is bounded • The crux is to show that extraction always succeeds: • Once we start rewinding we continue until we get another accepting transcript. • Need to show: • for every statement x, there exist some recursion depth and some slot for x that will be rewound • special-soundness rarely fails

  35. Claim: For every proof, at least k+1 slots will be rewound at some depth d. 1. There are only a constant number of depths. 2. Since we have ω(n+2k+1) slots, there must exist some depth d during which at least n+2k slots are executed. 3. But at depth d there can be at most M/n^d slots, so at least 2k+1 slots have at most M/n^{d+1} slots inside them. 4. So for at least k+1 of them there is no "external" message. Overkill? (Why isn't just 1 enough?) If C is efficient, 1 suffices; otherwise not.

  36. Proof Outline • Sequential Witness Hiding is “complete” • A positive answer to any of the questions implies the existence of a “special” O(1)-round sequential WH proof/argument for a language with unique witnesses. • Sequential WH of “special” O(1)-round proofs/arguments for languages with unique witnesses cannot be based on poly-round intractability assumptions using a Turing reduction.


  38. Completing Step 1 • If Schnorr is not WH, it cannot be an ID scheme; it doesn't have unique witnesses (for malformed public keys) but can easily be modified to have unique witnesses. • Commitments secure against selective opening => GMW G3C is "weakly" sequentially witness hiding • One-more inversion and blind signatures are slightly more complicated

  39. Part II – Separations from Falsifiable Assumptions: non-malleable commitments, statistical NIZK (succinct non-interactive arguments [GW])

  40. Non-Malleable Commitments [Dolev-Dwork-Naor'91] [Diagram: a man-in-the-middle MIM sits between a Sender and a Receiver/Sender, receiving Ci(v) on the left and producing Cj(v') on the right] Non-malleability: if i ≠ j, then v' is "independent" of v • Non-interactive (or even 2-message)? • Useful in both theory and practice • Partial positive results: based on adaptive OWPs [OPV'09] • Easy in the RO model • Original work of DDN: O(log n) rounds from OWFs • Later work [B'02,P'04,PR'05,…,LP'11,G'11]: O(1) rounds from OWFs

  41. Recall the Meta-Reduction Approach [Diagram: challenger C sends N=pq; R^A replies p,q] 1. Design a particular attacker A that breaks the scheme. 2. Show how to "indistinguishably" emulate the attacker in poly-time. Since we are tired by now: assume R only makes a single query (the approach we describe extends to many queries).

  42. Proof Idea: NM Commitments [Diagram: A sits between R and the commitment interaction, receiving C0(v) and returning C1(v); C sends N=pq, R replies p,q] • Strong attacker A: • Receives C0(v) • Breaks it in exp time • Returns C1(v) • Emulator A': • Receives C0(v) • Does not break it • Returns C1(0)

  43. Proof Idea: NM Commitments [Diagram: A' receives C0(v) and returns C1(0); C sends N=pq, R replies p,q] • Strong attacker A: • Receives C0(v) • Breaks it in exp time • Returns C1(v) • Emulator A': • Receives C0(v) • Does not break it • Returns C1(0)

  44. Proof Idea: NM Commitments [Diagram: A' receives C0(v) and returns C1(0); C sends N=pq, R replies p,q] • Strong attacker A: • Receives C0(v) • Breaks it in exp time • Returns C1(v) • Emulator A': • Receives C0(v) • Does not break it • Returns C1(0) By indistinguishability of C1(v) and C1(0), R still factors N! Rules out even "one-sided" non-malleability [LLMRS'01] But: this only works as long as R is PPT. Needed: [LLMRS'01] provide a construction based on subexp OWPs!

  45. Proof Idea: NM Commitments [Diagram: A' receives C0(v) and returns C1(0); C sends N=pq, R replies p,q] • Strong attacker A: • Receives C0(v) • Breaks it in exp time • Returns C1(v) • Emulator A': • Receives C0(v) • Does not break it • Returns C1(0) Can we rule out subexp R for "full-fledged" (two-sided) NM? Idea: 1. Either R^{A'} factors N (in which case we are done) 2. Or R can "distinguish" C1(1) and C1(0): use R to construct a successful attacker A'' [Diagram: A'' receives C1(v) and returns C0(v')]

  46. Proof Idea: NM Commitments [Diagram: A' receives C0(v) and returns C1(0); C sends N=pq, R replies p,q] • Strong attacker A: • Receives C0(v) • Breaks it in exp time • Returns C1(v) • Emulator A': • Receives C0(v) • Does not break it • Returns C1(0) Can we rule out subexp R for "full-fledged" (two-sided) NM? In essence, either R^{A'} or R^R factors N!

  47. NIZK for NP [Blum-Feldman-Micali'88] [Diagram: common random string ρ = 0101011101010101011001101; the Prover sends X, π to the Verifier, who thinks: "Thank you Alice, I believe X is true. But I don't know why!"; X = "G is 3-colorable"] • Statistical or Perfect NIZK for NP? • [GOS'06]: Perfect NIZK for NP; only sound for static statements • Adaptive statements? • partial positive result: based on "Knowledge of Exponent" [AF'07] • partial negative result: ruling out "direct BB reductions" [AF'07] • Easy in the RO model • Static statements [BFM'88]: X is chosen independently of ρ • Adaptively chosen statements [FLS'90]: X is chosen as a function of ρ • [BFM] based on QR, [FLS'90,BY'96] based on TDPs, …

  48. Proof Idea: Perfect NIZK in the URS Model [Diagram: A receives ρ and outputs x, π; C sends N=pq, R replies p,q] • Emulator A': • Receives ρ • Picks a random true x, w • Lets π = P(1^n, x, w) • Strong attacker A: • Receives ρ • Finds r s.t. S_r(1^n) = ρ [exp time] • Picks a random false x • Lets π = S_r(1^n, x) Why is A a valid attacker? Why is the emulation good? Idea (similar to [AF'07,GW'11]): if L is hard on average, neither V nor R can tell apart whether we use a true or a false statement. Problem: A is exp time! Rely on non-uniform hardness of L

  49. Thm 1: Assume non-uniformly secure OWFs. If there exists a PPT reduction R for basing soundness of a statistical NIZK for NP on the hardness of an efficient-challenger assumption (C,t), then there exists a PPT attacker B that breaks (C,t). Thm 2: If there exists a PPT reduction R for basing non-malleability of a 2-round commitment on the hardness of an efficient-challenger assumption (C,t), then there exists a PPT attacker B that breaks (C,t).

  50. In Sum: Several "classic" cryptographic tasks/schemes (which are believed to be secure) cannot be proven secure (using BB reductions) based on "standard" intractability assumptions. We only dealt with uniform (PPT) reductions; [CMLP13] extends these results also to nuPPT reductions.
