Presentation Transcript


  1. On necessary and sufficient cryptographic assumptions: the case of memory checking. Lecture 4: Lower bound on memory checking. Lecturer: Moni Naor, Weizmann Institute of Science. Web site of lectures: www.wisdom.weizmann.ac.il/~naor/COURSE/ens.html

  2. Recap of Lecture 3 • The memory checking problem • The online vs. offline versions • The relationship to the sub-linear authentication problem • A good offline protocol • based on a hash function that can be computed on the fly • Small-bias probability spaces • Hash functions for set equality • A good computational solution for the online problem, assuming one-way functions • Two solutions, both tree-based (see the sketch below) • Using pseudo-random tags • Using families of UOWHFs • The small memory need only be reliable • The Consecutive Message (CM) protocol model • Tight Θ(√n) bound for equality • t(n) · s(n) is Ω(n) • Similar to the simultaneous messages model • But: sublinear protocols exist iff one-way functions exist
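To make the recap's tree-based online solution concrete, here is a minimal sketch of the hash-tree flavor of checker (an illustration, not the lecture's exact construction): the small secret, reliable memory holds only the root hash, and every user request walks one root-to-leaf path of the public memory, giving t(n) = O(log n) probes. SHA-256 stands in for the PRF/UOWHF tags; all names are invented for the example.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """Stand-in for the pseudo-random / UOWHF tag of the lecture."""
    return hashlib.sha256(b"|".join(parts)).digest()

class TreeChecker:
    """Online memory checker sketch: secret memory = one root hash,
    each store/retrieve costs O(log n) probes to public memory."""
    def __init__(self, n: int):
        assert n & (n - 1) == 0, "n must be a power of two"
        self.n = n
        # Public (adversarial) memory: n data leaves + n-1 internal tags.
        self.pub = {i: b"0" for i in range(n, 2 * n)}
        for i in range(n - 1, 0, -1):          # internal node = hash of children
            self.pub[i] = h(self.pub[2 * i], self.pub[2 * i + 1])
        self.root = self.pub[1]                # the ONLY secret, reliable value

    def retrieve(self, addr: int) -> bytes:
        node = self.n + addr
        value = cur = self.pub[node]
        while node > 1:                        # recompute the path to the root
            sib = node ^ 1
            cur = h(cur, self.pub[sib]) if node % 2 == 0 else h(self.pub[sib], cur)
            node //= 2
        if cur != self.root:
            raise RuntimeError("BUGGY: public memory was tampered with")
        return value

    def store(self, addr: int, value: bytes) -> None:
        self.retrieve(addr)                    # authenticate the old path first
        node = self.n + addr
        self.pub[node] = cur = value
        while node > 1:                        # rewrite tags along the path
            sib = node ^ 1
            cur = h(cur, self.pub[sib]) if node % 2 == 0 else h(self.pub[sib], cur)
            node //= 2
            self.pub[node] = cur
        self.root = cur                        # update the secret memory

mc = TreeChecker(8)
mc.store(3, b"hello")
assert mc.retrieve(3) == b"hello"
mc.pub[8 + 3] = b"evil"                        # adversary rewrites public memory
# mc.retrieve(3) now raises: the recomputed root hash no longer matches.
```

Replay attacks are why a tree is needed at all: tagging each address independently would let the adversary roll a cell back to an older (value, tag) pair undetected.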

  3. This lecture • Learning distributions • The static and adaptive cases • Lower bounds on memory checking • Existence of sublinear protocols implies one-way functions

  4. Learning Distributions We are given many samples w1, w2, …, wm, each distributed according to a distribution D • Would like to `learn’ D • What does that mean? • Large parts of statistics are devoted to this question… • In computational learning theory two notions exist: • Learning by generation • should come up with a probabilistic circuit • whose output has distribution D provided the inputs are random • Approximation is possible • Learning by evaluation • Given x, can compute (approximate) PrD[x]

  5. Learning Distributions • Suppose D is determined by a string k of s `secret’ bits • Everything else is known If one-way functions exist, there are circuits C whose output distribution is computationally hard to learn: let Fk be a pseudo-random function; C holds the key k and outputs x ∘ Fk(x) for a random x
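A minimal sketch of this example, with HMAC-SHA256 standing in for the pseudo-random function Fk (the key size and sample count are arbitrary choices for illustration):

```python
import hmac, hashlib, os

k = os.urandom(16)                 # the s secret bits that determine D

def sample() -> bytes:
    """One draw from D: x || F_k(x), with HMAC-SHA256 standing in
    for the pseudo-random function F_k."""
    x = os.urandom(16)
    return x + hmac.new(k, x, hashlib.sha256).digest()

samples = [sample() for _ in range(1000)]
# Learning D by generation or by evaluation from `samples` alone would
# require predicting F_k on fresh inputs, i.e. breaking the PRF.
```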

  6. Learning Adaptively Changing Distributions Learning to predict and imitate the distribution of probabilistic, adaptively changing processes. E.g. the T-1000 robot, which can: “imitate anything it touches … anything it samples”

  7. Examples of adaptively changing distributions • Impersonation • Alice and Bob agree on a secret and engage in a sequence of identification sessions • Eve wants to learn to imitate one (or both) of the parties • How long should she wait? • Classification of a changing bacterium • How long must the scientist observe before making the classification? • Catching up: Sam and Sally are listening to a visiting professor’s lecture. Sam falls asleep for a while • How long would it take Sam to catch up with Sally?

  8. Learning Adaptively Changing Distributions What happens if the generating circuit C changes over time, as a reaction to events and the environment? • Secret state • Public state • Transition function D: Sp × Ss × R → Sp × Ss The sizes of the secret and public states are not restricted, but the size of the initial secret is restricted to s bits. How long would it take us to learn the distribution of the next public state, given the sequence of past public states? First answer: it may be impossible to learn. Example: the next public state may be the current secret state, with the current secret state chosen at random. So we want to be competitive with a party that knows the initial secret state (which is chosen at random).

  9. Definition of Learning an Adaptively Changing Distribution Let D be an adaptively changing distribution (ACD), D: Sp × Ss × R → Sp × Ss. Given public states P0, P1, …, Pk and the initial secret s0, there is an induced distribution Dk on the next public state. Definition: A learning algorithm (ε,δ)-learns the ACD if • It always halts and outputs a hypothesis h • With probability at least 1−δ we have Δ(Dk, h) ≤ ε The probability is over the random secret and the randomness in the evolution of the state.
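For reference (a standard definition, not spelled out on the slide), Δ here is the statistical (total variation) distance:

$$\Delta(D_1, D_2) \;=\; \frac{1}{2}\sum_{x}\Bigl|\Pr_{D_1}[x]-\Pr_{D_2}[x]\Bigr| \;=\; \max_{A}\Bigl(\Pr_{D_1}[A]-\Pr_{D_2}[A]\Bigr).$$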

  10. Algorithm for learning ACDs Theorem: For any ε, δ > 0 and any ACD, there is an algorithm that activates the system for O(s) rounds and (ε,δ)-learns the ACD. Repeat until success (or give up): • If there is a very-high-weight subset A of initial secret states whose induced distributions are close • Close = statistical distance less than ε • High weight = 1 − δ/2 • then output as h the distribution induced by any secret in A • Else activate the ACD and obtain the next public state Claim: if the algorithm terminates in the loop, then with probability at least 1 − δ/2, Δ(Dk, h) ≤ ε, conditioned on the public states seen so far. (A toy rendering of this loop follows.)
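Below is a brute-force, toy-sized rendering of the loop (an illustration of the idea, not the paper's algorithm): the learner maintains the exact posterior over the 2^s initial secrets, halts as soon as some ε-ball of induced next-state distributions carries posterior weight 1 − δ/2, and otherwise pays for one more public state. The toy ACD and all names are invented for the example; the coin space is tiny so distributions can be enumerated exactly.

```python
import random

COINS, MAX_ROUNDS = 4, 200           # toy-sized so everything is enumerable

def tv(p, q):
    """Statistical distance between two dicts state -> probability."""
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in set(p) | set(q))

def next_dist(D, secret, history):
    """Exact distribution of the next public state for a candidate secret."""
    dist = {}
    for coin in range(COINS):
        p = D(secret, history, coin)
        dist[p] = dist.get(p, 0.0) + 1.0 / COINS
    return dist

def learn_acd(D, observe, s_bits, eps, delta):
    secrets = list(range(2 ** s_bits))
    weight = {k: 1.0 / len(secrets) for k in secrets}   # posterior over secrets
    history = []
    for _ in range(MAX_ROUNDS):
        live = [k for k in secrets if weight[k] > 0]
        dists = {k: next_dist(D, k, history) for k in live}
        # Is there a (1 - delta/2)-weight set of secrets whose induced
        # distributions are all eps-close to some center d0?  If so, halt.
        for d0 in dists.values():
            if sum(weight[k] for k in live if tv(d0, dists[k]) <= eps) >= 1 - delta / 2:
                return d0                                # the hypothesis h
        # Otherwise: activate the ACD once and update the posterior.
        p = observe(history)
        history.append(p)
        for k in live:
            weight[k] *= dists[k].get(p, 0.0)
        total = sum(weight.values())
        weight = {k: w / total for k, w in weight.items()}
    return None                                          # gave up

# Invented toy ACD: the next public state mixes the secret, the round
# number and two fresh random bits; secrets equal mod 8 are equivalent.
def toy_D(secret, history, coin):
    return (secret * 7 + len(history) * 3 + coin) % 8

TRUE_SECRET = random.randrange(2 ** 5)
h = learn_acd(toy_D, lambda hist: toy_D(TRUE_SECRET, hist, random.randrange(COINS)),
              s_bits=5, eps=0.1, delta=0.1)
```

Each failed test of the halting condition forces an observation that, per the analysis on the next slide, strips a noticeable amount of entropy from the posterior, which is why O(s) rounds suffice.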

  11. Analysis • Main parameter for showing that the algorithm advances: the entropy of the initial secret • Key lemma: if the high-weight condition does not hold, then the expected entropy drop of the initial secret is high • At least Ω(ε²δ²) • Hence after O(s) iterations (the constant depends on ε and δ) not much entropy is left The (Shannon) entropy of X is H(X) = −∑x∈Γ PX(x) log PX(x), where Γ is the support of X

  12. Efficiency of the Algorithm • We would like to be able to learn every ACD where D is an efficiently computable function Theorem: One-way functions exist iff there are an efficiently computable ACD D and some ε, δ > 0 for which it is (ε, δ)-hard to learn D

  13. Connection to memory checking and authentication: learning the access pattern distribution Corollary of the ACD learning theorem: For any ε, δ > 0 and any x, when: • E is activated on x, with secret output sx • The adversary activates V at most O(s(n)/(ε²δ²)) times • The adversary learns a secret encoding sL px: the final public encoding reached. Dp(s): the access pattern distribution in the next activation on public encoding p. Randomness is over the activations of E and V. Guarantee: with probability at least 1−δ, the distributions Dpx(sx) and Dpx(sL) are ε-close (statistically).

  14. Memory Checkers: how to check a large and unreliable memory • Store and retrieve requests to a large adversarial public memory • a vector in {0,1}n • Detects whether the answer to any retrieve was different from the last store • Uses a small, secret, reliable memory: space complexity s(n) bits • Makes its own store and retrieve requests: query complexity t(n) bits [Figure: user U ↔ memory checker C (secret memory S, s(n) bits) ↔ public memory P, with t(n) bits exchanged per request]

  15. Computational assumptions and memory checkers • For offline memory checkers no computational assumptions are needed: • Probability of not detecting errors: ε • Query complexity: t(n) = O(1) (amortized) • Space complexity: s(n) = O(log n + log 1/ε) • For online memory checkers with computational assumptions, good schemes are known: • Query complexity: t(n) = O(log n) • Space complexity: s(n) = n^ε (for any ε > 0) • Probability of not detecting errors: negligible Main result: computational assumptions are necessary for sublinear schemes

  16. Recall: Memory Checker → Authenticator If there exists an online memory checker with • space complexity s(n) • query complexity t(n) then there exists an authenticator with • space complexity O(s(n)) • query complexity O(t(n)) Strategy for the lower bound on memory checking: prove it for authenticators

  17. The Lower Bound Theorem 1 [tight lower bound]: For any online memory checker (authenticator) secure against a computationally unbounded adversary, s(n) × t(n) is Ω(n)

  18. Memory Checkers and One-Way Functions Breaking the lower bound implies one-way functions. Theorem 2: If there exists an online memory checker (authenticator) • working in polynomial time • secure against polynomial-time adversaries • with query and space complexity s(n) × t(n) < c · n (for a specific c > 0) then there exist functions that are hard to invert for infinitely many input lengths (“almost one-way” functions)

  19. Program for showing the lower bound: • Prove the lower bound: • first a simple case, by a reduction to the consecutive message model • then the generalized reduction

  20. x {0,1}n ALICE mA f(x,y) CAROL mB BOB y {0,1}n Simultaneous Messages Protocols • For the equality function: • |mA| x |mB| = (n)[Babai Kimmel 1997]

  21. Consecutive Messages Protocols [Figure: with public randomness rp, Alice (holding x ∈ {0,1}n) sends mA and posts a public message mP; Bob (holding y ∈ {0,1}n) then sends mB; Carol outputs f(x,y). In the reduction below, |mA| will play the role of s(n) and |mB| the role of t(n).] Theorem: For any CM protocol that computes the equality function, if |mP| ≤ n/100 then |mA| × |mB| = Ω(n)

  22. The Reduction Idea: Use an authenticator to construct a CM protocol for equality testing

  23. Recall: Authenticators. How to authenticate a large and unreliable memory with a small and secret memory. [Figure: encoder E maps x ∈ {0,1}n to a secret encoding sx (s(n) bits) and a public encoding px; the adversary may replace px by py; verifier V reads sx plus t(n) bits of the public encoding and accepts or rejects; D decodes.] • sx = Esecret(x, r) • px = Epublic(x, r)

  24. A Simple(r) Construction Simplifying assumption: V chooses which indices of the public encoding to access independently of the secret encoding. In particular: the adversary knows the access pattern distribution

  25. ALICE x {0,1}n sx BOB reject accept y secret encoding sx x x {0,1}n CAROL public encoding py public encoding px bits reject accept x {0,1}n y {0,1}n V E D

  26. To show it works, we must show: • When x = y, the CM protocol accepts • (otherwise the authenticator would reject even though no changes were made) • How to translate an adversary for the CM protocol that makes it accept when x ≠ y into an adversary that cheats the verifier

  27. Why It Works (1) Claim: If (E,D,V) is an authenticator then the CM protocol is good. Correctness when x = y: Alice and Bob should use the same public encoding of x. To do this, use the public randomness rpub as the randomness for the encoding

  28. Why It Works (2) Security: suppose an adversary for the CM protocol breaks it, i.e., makes Carol accept when x ≠ y. We want to show it can break the authentication as well • Tricky: the “CM adversary” sees rpub! • This might leak information, since sx is chosen as Esecret(x, rpub) • Solution (rerandomizing): for sx, Alice selects different real randomness giving the same public encoding! • Choose r′ ∈R { r : Epublic(x, r) = Epublic(x, rpub) } • Let sx = Esecret(x, r′) • Exactly the same information is available to the authenticator adversary in a regular execution • The public encoding is px = Epublic(x, r) • Hence: the probability of cheating is the same Conclusion: s(n) × t(n) is Ω(n)

  29. The Rerandomizing Technique Always choose `at random’ the random coins consistent with the information you have
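In code, rerandomizing is just uniform sampling from a preimage: given x and the public encoding forced by rpub, draw fresh coins conditioned on producing that same public encoding. A toy sketch with an enumerable coin space; E_public here is an invented stand-in for the encoder's public output:

```python
import hashlib, random

COINS = range(2 ** 12)                     # toy-sized coin space: enumerable

def E_public(x: bytes, r: int) -> bytes:
    """Invented stand-in for the encoder's public output."""
    return hashlib.sha256(x + r.to_bytes(2, "big")).digest()[:1]

def rerandomize(x: bytes, r_pub: int) -> int:
    """Sample r' uniformly from { r : E_public(x, r) = E_public(x, r_pub) }."""
    target = E_public(x, r_pub)
    consistent = [r for r in COINS if E_public(x, r) == target]
    return random.choice(consistent)       # never empty: r_pub itself is in it

r_pub = random.randrange(2 ** 12)
r_prime = rerandomize(b"some database x", r_pub)
# Alice derives the secret encoding from r' rather than r_pub:
# s_x = E_secret(x, r').  Given the public encoding, r' is uniform, so
# the authenticator adversary's view matches a regular execution and
# the cheating probability is unchanged.
```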

  30. Why it doesn’t always work • What if the verifier uses the secret encoding to determine its access pattern distribution? • The simple lower bound applies to “one-time” authenticators • where the adversary sees only a single verification • Is this true without the simplifying assumption?

  31. “One-Time Authenticators” [Figure: E encodes x ∈ {0,1}n into a public encoding E(x) and a secret encoding; V performs a single verification and accepts.] • There is a one-time authenticator with space complexity O(log n) and query complexity O(1), so no s(n) × t(n) = Ω(n) bound can hold for one-time security • Lesson: use the fact that V is secure when run many times
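One plausible instantiation with these complexities (an illustration under stated assumptions, not necessarily the lecture's construction): publish an error-correcting encoding of x and secretly remember one random codeword position together with its value. A seeded random linear code again stands in for a code with constant relative distance:

```python
import numpy as np

n, m = 16, 64
G = np.random.default_rng(0).integers(0, 2, size=(n, m))
# Public random linear code, a stand-in for a constant-distance code.

def E(x, rng):
    """Public encoding: the codeword of x.
    Secret encoding: one random position and its bit -- O(log n) bits."""
    pub = x @ G % 2
    i = int(rng.integers(0, m))
    return pub, (i, int(pub[i]))

def V(pub, secret):
    """Single verification: one O(1)-bit query to the public encoding."""
    i, bit = secret
    return int(pub[i]) == bit

rng = np.random.default_rng()
x = rng.integers(0, 2, n)
pub, sec = E(x, rng)
assert V(pub, sec)
# Any y != x has a codeword at constant relative distance from pub, so a
# substituted encoding is rejected with constant probability.  But the
# scheme is only one-time: once the adversary watches V read index i, it
# can rewrite every other position undetected.
```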

  32. Progress: • Prove lower bounds: • First a simple case • Then the generalized reduction • A discussion of one-way functions

  33. Authenticators: Access Pattern. Access pattern: the indices accessed by V and the bit values read. Access pattern distribution: the distribution of the access pattern in V’s next activation, given: • E’s initial secret string • Past access patterns Randomness: over V’s coin tosses in all its activations. We want to be able to state: the adversary knows the access pattern distribution, even though it can’t see E’s secret output.

  34. [Figure: the access pattern distribution. E encodes x ∈ {0,1}n into the secret sx and a public encoding; the public encoding may evolve (possibly to some py) while the secret persists; each activation of V induces, given the randomness so far, a distribution over access patterns, and D decodes.]

  35. Learning the Access Pattern Distribution • Important lesson: if the adversary doesn’t know the access pattern distribution, then V is “home free” • In the “one-time” example, V exposes the secret indices! • Lesson: activate V many times and “learn” its distribution! Recall: learning adaptively changing distributions.

  36. Connection to memory checking and authentication: learning the access pattern distribution Corollary of the ACD learning theorem: For any ε, δ > 0 and any x, when: • E is activated on x, with secret output sx • The adversary activates V at most O(s(n)/(ε²δ²)) times • The adversary learns a secret encoding sL px: the final public encoding reached. Dp(s): the access pattern distribution in the next activation on public encoding p. Randomness is over the activations of E and V. Guarantee: with probability at least 1−δ, the distributions Dpx(sx) and Dpx(sL) are ε-close (statistically).

  37. ALICE x {0,1}n sx a, sL rpub BOB accept secret encoding sx x x {0,1}n CAROL sL public encoding px bits accept x {0,1}n V E D

  38. Sampling by sL, simulating by sx The access pattern distributions under sL and sx are ε-close: • Bob generates an access pattern a using sL • Carol selects a random string r from those that give a on secret input sx • (rerandomization) • and simulates V using the random string r Claim: the distribution of r is ε-close to uniform

  39. Does it Work? Security? The adversary sees sL! Not a problem: could have learned this on its own What about y≠x?

  40. Recap (1) Adversary wants to know access pattern distribution • Can learn access pattern distribution • Saw protocol that accepts when y=x • What about y≠x?

  41. ALICE x {0,1}n sx a, sL rpub BOB ? y secret encoding sx x x {0,1}n CAROL public encoding py sL public encoding px bits ? y {0,1}n V E D

  42. Does it Work? (2) • Will this also work when y ≠ x? • No! Big problem for the adversary: • it can learn the access pattern distribution on the correct, unmodified public encoding… • but it really wants the distribution on a different, modified encoding! • The distributions under sx and sL may be: • very close on the unmodified encoding (px) • very far on any other (e.g. py) • We can’t hope to learn the distribution on a modified public encoding • Not enough information/iterations

  43. Back to The Terminator: TERMINATOR: What’s the dog’s name? JOHN: Max. TERMINATOR: Hey, Janelle, what’s wrong with Wolfy? I can hear him barking. Is he okay? T-1000 (impersonating Janelle): Wolfy’s fine, honey. Where are you? Terminator hangs up: Your foster parents are dead. Let’s go.

  44. Recap (2) Adversary wants to know access pattern distribution • Can learn access pattern distribution • Saw protocol that accepts when y=x • What about y≠x? • Big problem: can’t “learn” the access pattern distribution in this case!

  45. Bait and Switch (1) • Intuition: if Carol knows sx and sL, and they give different distributions, then she can reject • Concrete idea: Bob always uses sL to determine the access pattern • Carol will check whether the distributions are close or far • This is a “weakening” of the verifier; we need to show it is still secure

  46. Bait and Switch (2) • Give Carol access to sx and to sL. Also give her the previous access patterns (a) • Bob got public encoding p • Recall Dp(sx) and Dp(sL): • the access pattern distributions on public encoding p with sx and sL as initial private encodings

  47. [Figure: the access pattern distribution, revisited on an arbitrary public encoding p reached after the adversary’s modifications, with sx as the initial secret and the randomness given.]

  48. Bait and Switch (3) • If only Carol could compute Dp(sx) and Dp(sL)… she would check whether they are ε-close: • If they are far, then p cannot be the “real” public encoding! Reject. • If they are close, then: • use sL to determine the access pattern • simulate V with sx and that access pattern

  49. Bait and Switch (4) • Last problem: • V′ cannot compute the distributions Dp(sx) and Dp(sL) without reading all of p (V may be adaptive) • Observation: • V′ can compute the probability of any access pattern for which all the bits read from p are known • Solution: • sample O(1) access patterns from Dp(sL) and use them to approximate the distance between the distributions • (this sampling is the only operation used that is not a rerandomizing inverse)
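This estimator needs only sample access to Dp(sL) plus the ability to evaluate the probability of a fully-read access pattern under both secrets. One standard identity that fits: Δ(P, Q) = E_{a∼P}[max(0, 1 − Q(a)/P(a))], so averaging that quantity over a few samples approximates the distance (for fixed ε, δ a constant number of trials suffices). A sketch with placeholder distributions:

```python
import random

def estimate_distance(sample_P, prob_P, prob_Q, trials=64):
    """Estimate Delta(P, Q) given sample access to P and the ability to
    evaluate both probabilities, via the identity
    Delta(P, Q) = E_{a ~ P}[ max(0, 1 - Q(a) / P(a)) ]."""
    total = 0.0
    for _ in range(trials):
        a = sample_P()                     # an access pattern drawn using s_L
        total += max(0.0, 1.0 - prob_Q(a) / prob_P(a))
    return total / trials

# Toy check with two biased coins at statistical distance 0.3:
P, Q = {0: 0.1, 1: 0.9}, {0: 0.4, 1: 0.6}
d = estimate_distance(lambda: random.choices([0, 1], weights=[0.1, 0.9])[0],
                      P.get, Q.get)        # concentrates around 0.3
# Carol rejects when the estimate is large (p cannot be the real public
# encoding) and otherwise simulates V on s_x with the sampled pattern.
```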

  50. ALICE x {0,1}n sx accept close a, sL rpub BOB reject reject accept far y secret encoding sx x x {0,1}n CAROL public encoding py sL public encoding px bits reject x {0,1}n y {0,1}n V E D
