Lattices: Definition and Related Problems
Lattices
Definition (lattice): Given a basis v1,…,vn ∈ R^n, the lattice L = L(v1,…,vn) is
L = { Σ aivi | ai ∈ Z }
Illustration - A lattice in R^2
Each point corresponds to a vector in the lattice. "Recipe":
1. Take two linearly independent vectors in R^2.
2. Close the set under addition and under multiplication by integer scalars.
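The "recipe" above can be sketched directly: a finite window into the (infinite) lattice is just all small integer combinations of the basis. A minimal sketch in Python (the function name and coefficient window are illustrative, not from the slides):

```python
import itertools

def lattice_window(v1, v2, coeffs=range(-2, 3)):
    """A finite window into the lattice L(v1, v2): all integer
    combinations a1*v1 + a2*v2 with small coefficients a1, a2."""
    return {(a1 * v1[0] + a2 * v2[0], a1 * v1[1] + a2 * v2[1])
            for a1, a2 in itertools.product(coeffs, repeat=2)}
```

For the standard basis this window is the 5x5 integer grid; note the set is symmetric about the origin, as any lattice is.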
Shortest Vector Problem
SVP (Shortest Vector Problem): Given a lattice L, find s ≠ 0 in L s.t. for any x ≠ 0 in L, ||s|| ≤ ||x||.
The Shortest Vector - Examples What’s the shortest vector in the lattice spanned by the two given vectors?
Closest Vector Problem
CVP (Closest Vector Problem): Given a lattice L and a vector y ∈ R^n, find a v ∈ L s.t. ||y - v|| is minimal. Which lattice vector is closest to the marked vector?
Lattice Approximation Problems
• g-Approximation version: find a vector y ≠ 0 in L s.t. ||y|| ≤ g·shortest(L).
• g-Gap version: given L and a number d, distinguish between
  • the 'yes' instances (shortest(L) ≤ d)
  • the 'no' instances (shortest(L) > g·d).
If the g-Gap problem is NP-hard, then a polynomial-time g-approximation algorithm would imply P = NP.
Lattice Problems - Brief History
• [Dirichlet, Minkowski]: no CVP algorithms…
• [LLL]: approximation algorithm for SVP, factor 2^{n/2}
• [Babai]: extension to CVP
• [Schnorr]: improved factor, 2^{n/lg n}, for both CVP and SVP
• [vEB]: CVP is NP-hard
• [ABSS]: approximating CVP is
  • NP-hard to within any constant
  • almost-NP-hard to within an almost-polynomial factor.
Lattice Problems - Recent History
• [Ajtai96]: worst-case/average-case reduction for SVP.
• [Ajtai-Dwork96]: cryptosystem.
• [Ajtai97]: SVP is NP-hard (for randomized reductions).
• [Micc98]: SVP is NP-hard to approximate to within some constant factor.
• [DKRS]: CVP is NP-hard to within an almost-polynomial factor.
• [LLS]: approximating CVP to within n^{1.5} is in coNP.
• [GG]: approximating SVP and CVP to within √n is in coAM ∩ NP.
CVP/SVP - which is easier?
Reminder:
• Definition (lattice): Given a basis v1,…,vn ∈ R^n, the lattice L = L(v1,…,vn) is { Σ aivi | ai integers }.
• SVP (Shortest Vector Problem): find the shortest non-zero vector in L.
• CVP (Closest Vector Problem): given a vector y ∈ R^n, find a v ∈ L closest to y.
Why is SVP not the same as CVP with y = 0? Ohh… but isn't that just an annoying technicality?…
Trying to Reduce SVP to CVP
#1 try: y = 0. The CVP oracle on (B, y) finds (c1,…,cn) ∈ Z^n which minimizes ||c1b1+…+cnbn - y||, whereas SVP needs (c1,…,cn) ≠ 0 which minimizes ||c1b1+…+cnbn||. …But with y = 0 the oracle will simply return the trivial solution s = 0…
Geometrical Intuition
The obvious reduction: the shortest vector is the difference between (say) b2 and the lattice vector closest to b2 (not b2 itself!). For example, in the pictured lattice L the shortest vector is b2 - 2b1. Thus we would like to somehow "extract" b2 from the lattice, so the oracle for CVP will be forced to find the non-trivial vector closest to b2, i.e. the closest to b2 besides b2 itself. …This is not as simple as it sounds…
Trying to Reduce SVP to CVP
The trick: replace b1 with 2b1 in the basis, i.e. invoke the CVP oracle on (B^(1), b1) with B^(1) = (2b1, b2,…,bn). The oracle finds (c1,…,cn) ∈ Z^n which minimizes ||c1·2b1 + c2b2 + … + cnbn - b1||; this corresponds to the lattice vector s = (2c1-1)b1 + c2b2 + … + cnbn, and since 2c1-1 is odd, s ≠ 0. But in this way we only discover the shortest vector among those with an odd coefficient for b1. That's not really a problem! One of the coefficients of the shortest vector must be odd (why? if all were even, halving the vector would give a shorter lattice vector), so we can do this process for all the vectors in the basis and take the shortest result.
Geometrical Intuition
By doubling a vector in the basis, we extract it from the lattice without changing the lattice "too much": L'' = L(2b1, b2), L' = L(b1, 2b2). The vector closest to b2 in the original lattice L is also in the new lattice L''; the vector closest to b1 in the original lattice may be lost in the new lattice. So we risk losing the closest point in the process. It's a calculated risk, though: one of the closest points has to survive…
The Reduction of g-SVP to g-CVP
Input: a pair (B, d), B = (b1,…,bn) and d ∈ R.
For j = 1 to n: invoke the CVP oracle on (B^(j), bj, d), where B^(j) = (b1,…,bj-1, 2bj, bj+1,…,bn).
Output: the OR of all oracle replies.
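The reduction can be sketched end-to-end. Since we have no real CVP oracle, the sketch below stands in a brute-force search over small coefficients (the helper names and the coefficient bound R are assumptions for illustration); the reduction itself is exactly the loop above: double bj, ask for the lattice vector closest to bj, and keep the shortest difference.

```python
import itertools, math

def cvp_brute(basis, target, R=4):
    """Stand-in for the CVP oracle: exhaustively search integer
    coefficients in [-R, R] for the lattice vector closest to target."""
    best, best_d = None, math.inf
    dim = len(target)
    for cs in itertools.product(range(-R, R + 1), repeat=len(basis)):
        v = [sum(c * b[k] for c, b in zip(cs, basis)) for k in range(dim)]
        d = math.dist(v, target)
        if d < best_d:
            best, best_d = v, d
    return best

def svp_via_cvp(basis):
    """SVP via n CVP calls: in B^(j), bj is doubled, so the vector
    closest to bj differs from bj by a non-zero lattice vector."""
    best, best_len = None, math.inf
    for j in range(len(basis)):
        Bj = [list(b) for b in basis]
        Bj[j] = [2 * x for x in basis[j]]
        v = cvp_brute(Bj, basis[j])
        s = [vi - ti for vi, ti in zip(v, basis[j])]
        length = math.hypot(*s)
        if 0 < length < best_len:
            best, best_len = s, length
    return best, best_len
```

On the basis b1 = (2,1), b2 = (3,2) the shortest lattice vector has length 1 (e.g. 2b1 - b2 = (1,0)), and the reduction recovers it even though neither basis vector has length 1.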
Hardness of SVP & Applications
• Finding, and even approximating, the shortest vector is hard.
• Next we will see how this fact can be exploited for cryptography.
• We start by explaining the general framework: a well-known cryptographic method called a public-key cryptosystem.
Public-Key Cryptosystems and Brave Spies…
The brave spy wants to send the HQ a secret message: "The enemy will attack within a week." …But the enemy is in between, and THEY can see and duplicate whatever is transmitted. The solution: in the HQ a brand-new lock was developed for such cases. HQ -> spy: the open lock is sent to the spy, who can easily lock the message without the key. Spy -> HQ: the locked message can now be sent to the HQ without fear, and read only there.
Public-Key Cryptosystem (76)
Requirements: two poly-time computable functions Encr and Decr, s.t.:
1. ∀x: Decr(Encr(x)) = x.
2. Given Encr(x) only, it is hard to find x.
Usage: make Encr public so anyone can send you messages; keep Decr private.
The Dual Lattice
L* = { y | ∀x ∈ L: y·x ∈ Z }
Given a basis {v1,…,vn} for L, one can construct in poly-time a basis {u1,…,un} with ui·vj = 0 (i ≠ j) and ui·vi = 1. In other words, U = (V^t)^{-1}, where U = (u1,…,un) and V = (v1,…,vn).
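For a rank-2 lattice the dual basis is easy to compute exactly. A sketch (function name ours) that inverts V^t by the 2x2 cofactor formula, using Fractions to stay exact:

```python
from fractions import Fraction

def dual_basis_2d(v1, v2):
    """Rows u1, u2 of the dual basis: U = (V^t)^{-1}, so ui.vj = delta_ij."""
    a, b = map(Fraction, v1)
    c, d = map(Fraction, v2)
    det = a * d - b * c          # det(V) = det(V^t)
    assert det != 0, "basis vectors must be linearly independent"
    # cofactor formula for the inverse of V^t = [[a, c], [b, d]]
    return [[d / det, -c / det], [-b / det, a / det]]
```

One can check directly that each ui has inner product 1 with vi and 0 with the other basis vector, as the definition requires.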
Shortest Vector - Hidden Hyperplane
Observation: the shortest vector s induces distinct layers in the dual lattice:
H0 = { y | y·s = 0 }, H1 = { y | y·s = 1 }, …, Hk = { y | y·s = k }.
The distance between consecutive hyperplanes is 1/||s||.
(s - shortest vector, H - hidden hyperplane)
Encrypting
Given the lattice L, encryption is polynomial:
Encoding 0: (1) choose a random lattice point; (2) perturb it.
Encoding 1: choose a random point.
(s - shortest vector, H - hidden hyperplane)
Decrypting
Given s, decryption can be carried out in polynomial time; otherwise it is hard.
Decoding 0: if the projection (on s) is close to one of the hyperplanes.
Decoding 1: if the projection of the point is not close to any of the hyperplanes.
(s - shortest vector, H - hidden hyperplane)
GG
• Approximating SVP and CVP to within √n is in NP ∩ coAM. Hence, if these problems are shown NP-hard, the polynomial-time hierarchy collapses.
The World According to Lattices
[Figure: the approximation-factor axis for CVP and SVP - 1, 1+1/n, O(1), O(log n), 2, n^{1/lg lg n}, √n, 2^{n/lg n} - marking the NP-hardness region (Ajtai-Micciancio, DKRS), the NP ∩ co-AM region (GG), and the factor achieved by poly-time approximation (L3).]
Open Problems
• Is g-SVP NP-hard to approximate to within √n?
• For super-polynomial, sub-exponential factors: is it a class of its own?
• Can LLL be improved?
Approximating SVP in Poly-Time The LLL Algorithm
What's coming up next?
• To within what factor can SVP be approximated?
• In this chapter we describe a polynomial-time algorithm for approximating SVP to a factor of 2^{(n-1)/2}.
• We will later see that approximating the shortest vector to within √2 - ε, for some ε > 0, is NP-hard.
The Fundamental Insight(?)
Assume an orthogonal basis for a lattice. The shortest vector in this lattice is…
Illustration
x = 2v1 + v2, with v1 and v2 orthogonal: ||x|| > ||2v1|| and ||x|| > ||v2||.
The Fundamental Insight(!) Assume an orthogonal basis for a lattice. The shortest vector in this lattice is the shortest basis vector
Why?
• If a1,…,ak ∈ Z and v1,…,vk are orthogonal, then ||a1v1+…+akvk||² = a1²·||v1||² + … + ak²·||vk||².
• Therefore if vi is the shortest basis vector, and there exists 1 ≤ j ≤ k s.t. aj ≠ 0, then ||a1v1+…+akvk||² ≥ ||vi||²·(a1²+…+ak²) ≥ ||vi||².
• No non-zero lattice vector is shorter than vi.
Gram-Schmidt
What if we don't get an orthogonal basis? Remember the good old Gram-Schmidt procedure: it turns a basis v1,…,vk for a subspace of R^n into an orthogonal basis v1*,…,vk* for the same subspace. The idea: take a vector and subtract its projections on each one of the vectors already taken.
Projections
Computing the projection of v on u (denoted w): w = (<v,u> / <u,u>)·u.
Formally: The Gram-Schmidt Procedure
• Input: a basis {v1,…,vk} of some subspace of R^n.
• Output: an orthogonal basis {v1*,…,vk*}, s.t. for every 1 ≤ i ≤ k, span({v1,…,vi}) = span({v1*,…,vi*}).
• Process: the procedure starts with v1* = v1. Each iteration (1 < i ≤ k) adds a vector which is orthogonal to the subspace already spanned:
vi* = vi - Σ_{j<i} (<vi, vj*> / <vj*, vj*>)·vj*
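The procedure above can be sketched in a few lines of Python (plain lists, no external libraries; the function name is ours):

```python
def gram_schmidt(basis):
    """Orthogonalize a basis (list of vectors) without normalizing:
    v_i* = v_i minus its projections on v_1*, ..., v_{i-1}*."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    ortho = []
    for v in basis:
        w = list(v)
        for u in ortho:
            mu = dot(v, u) / dot(u, u)          # projection coefficient
            w = [wi - mu * ui for wi, ui in zip(w, u)]
        ortho.append(w)
    return ortho
```

For the basis (3,1), (2,2) this yields v1* = (3,1) and v2* ≈ (-0.4, 1.2), which are orthogonal.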
Example
(Assume v3 ⊥ v1, only to simplify the presentation.) v3* is obtained by subtracting from v3 its projection on v2*; v1* and v2* span the subspace onto which v3 is projected.
Wishful Thinking
Unfortunately, the basis Gram-Schmidt constructs doesn't necessarily span the same lattice. Example: if the projection of v2 on v1 is 1.35·v1, then v1 and v2 - 1.35·v1 don't span the same lattice as v1 and v2. As a matter of fact, not every lattice even has an orthogonal basis…
Nevertheless…
• Invoking Gram-Schmidt on a lattice basis produces a lower bound on the length of the shortest vector in this lattice.
Lower Bound on the Length of the Shortest Vector
Claim: Let vi* be the shortest vector in the basis constructed by Gram-Schmidt. For any non-zero lattice vector x: ||vi*|| ≤ ||x||.
Proof: Write x = z1v1 + … + zkvk with z1,…,zk ∈ Z, and also x = r1v1* + … + rkvk* with r1,…,rk ∈ R. Let m be the largest index for which zm ≠ 0. For j < m, the projection of vj on vm* is 0, and the projection of vm on vm* is vm*; hence rm = zm, and thus ||x|| ≥ |rm|·||vm*|| = |zm|·||vm*|| ≥ ||vm*|| ≥ ||vi*||.
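This lower bound is cheap to compute and easy to sanity-check against brute force. A sketch (helper names and the search bound R are ours):

```python
import itertools, math

def gs_lower_bound(basis):
    """min_i ||v_i*|| over the Gram-Schmidt basis: a lower bound
    on the length of the shortest non-zero lattice vector."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    ortho = []
    for v in basis:
        w = list(v)
        for u in ortho:
            mu = dot(v, u) / dot(u, u)
            w = [wi - mu * ui for wi, ui in zip(w, u)]
        ortho.append(w)
    return min(math.sqrt(dot(w, w)) for w in ortho)

def shortest_brute(basis, R=5):
    """Brute-force shortest non-zero vector over small coefficients,
    just to check the bound on tiny examples."""
    best = math.inf
    for cs in itertools.product(range(-R, R + 1), repeat=len(basis)):
        if any(cs):
            v = [sum(c * b[k] for c, b in zip(cs, basis))
                 for k in range(len(basis[0]))]
            best = min(best, math.hypot(*v))
    return best
```

On the basis (2,1), (3,2) the bound is min(||v1*||, ||v2*||) ≈ 0.447, while the true shortest vector has length 1, so the bound holds but can be far from tight.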
Compromise
• Still, we'll have to settle for less than an orthogonal basis:
• We'll construct a reduced basis.
• Reduced bases are composed of "almost" orthogonal and relatively short vectors.
• They will therefore suffice for our purpose.
Reduced Basis
Definition (reduced basis): A basis {v1,…,vn} of a lattice is called reduced if:
(1) ∀ 1 ≤ j < i ≤ n: |μij| ≤ ½, where μij = <vi, vj*> / <vj*, vj*>
(2) ∀ 1 ≤ i < n: ¾·||vi*||² ≤ ||vi+1* + μi+1,i·vi*||²
(In (2), vi* is the projection of vi on span{vi*,…,vn*}, and vi+1* + μi+1,i·vi* is the projection of vi+1 on span{vi*,…,vn*}.)
Properties of Reduced Bases (1)
Claim: If a basis {v1,…,vn} is reduced, then for every 1 ≤ i < n: ½·||vi*||² ≤ ||vi+1*||².
Proof:
¾·||vi*||² ≤ ||vi+1* + μi+1,i·vi*||²  (since {v1,…,vn} is reduced)
= ||vi+1*||² + μi+1,i²·||vi*||²  (since vi* and vi+1* are orthogonal)
≤ ||vi+1*||² + ¼·||vi*||²  (since |μi+1,i| = |<vi+1, vi*>/<vi*, vi*>| ≤ ½)
And the claim follows.
Corollary: By induction on i-j, for all j ≤ i: (½)^{i-j}·||vj*||² ≤ ||vi*||².
Properties of Reduced Bases (2)
Claim: If a basis {v1,…,vn} is reduced, then for every 1 ≤ j ≤ i ≤ n: (½)^{i-1}·||vj||² ≤ ||vi*||².
Proof: Since {v1*,…,vn*} is an orthogonal basis, ||vj||² = ||vj*||² + Σ_{k<j} μjk²·||vk*||². Since |μjk| ≤ ½ for 1 ≤ k ≤ j-1, this is at most ||vj*||² + ¼·Σ_{k<j} ||vk*||². By the previous corollary, ||vk*||² ≤ 2^{j-k}·||vj*||², so (geometric sum) ||vj||² ≤ (1 + ¼·(2^j - 2))·||vj*||² ≤ 2^{j-1}·||vj*||². Applying the corollary once more, ||vj*||² ≤ 2^{i-j}·||vi*||², which implies ||vj||² ≤ 2^{i-1}·||vi*||². And this is in fact what we wanted to prove.
Approximation for SVP
The previous claim, together with the lower bound min_i ||vi*|| on the length of the shortest vector, gives: for the index i minimizing ||vi*||, ||v1||² ≤ 2^{n-1}·||vi*||² ≤ 2^{n-1}·shortest(L)². Hence the length of the first vector of any reduced basis provides at least a 2^{(n-1)/2} approximation for the length of the shortest vector. It now remains to show that any basis can be reduced in polynomial time.
Reduced Basis
• Recall that the definition of a reduced basis is composed of two requirements:
  • ∀ 1 ≤ j < i ≤ n: |μij| ≤ ½
  • ∀ 1 ≤ i < n: ¾·||vi*||² ≤ ||vi+1* + μi+1,i·vi*||²
• We introduce two types of "lattice-preserving" transformations: reduction and swap, which will allow us to reduce any given basis.
First Transformation: Reduction
The transformation (for 1 ≤ l < k ≤ n), with m = ⌈μkl⌋ the integer nearest to μkl:
vk ← vk - m·vl; vi ← vi for i ≠ k.
The consequences:
• vi* ← vi* for all 1 ≤ i ≤ n (the Gram-Schmidt basis is unchanged)
• μkj ← μkj - m·μlj for 1 ≤ j < l
• μkl ← μkl - m
• μij ← μij for 1 ≤ j < i ≤ n, i ≠ k
Using this transformation we can ensure |μij| ≤ ½ for all j < i.
Second Transformation: Swap
The transformation (for 1 ≤ k < n): vk ↔ vk+1; vi ← vi for i ≠ k, k+1.
The important consequence: vk* ← vk+1* + μk+1,k·vk*.
If we apply this transformation for a k which satisfies ¾·||vk*||² > ||vk+1* + μk+1,k·vk*||², the value of ||vk*||² drops to less than ¾ of its previous value.
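Putting the two transformations together gives the LLL algorithm itself. The sketch below is a straightforward, unoptimized version (recomputing Gram-Schmidt at every step keeps it short but wastes work): it size-reduces vk against its predecessors, then swaps whenever the ¾ condition above is violated.

```python
def lll_reduce(basis, delta=0.75):
    """A minimal LLL sketch: size-reduction (first transformation)
    plus swap (second transformation) until the basis is reduced."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gs(b):
        # Gram-Schmidt vectors and the mu coefficients, recomputed from scratch
        ortho, mu = [], [[0.0] * len(b) for _ in b]
        for i, v in enumerate(b):
            w = list(v)
            for j, u in enumerate(ortho):
                mu[i][j] = dot(v, u) / dot(u, u)
                w = [wi - mu[i][j] * ui for wi, ui in zip(w, u)]
            ortho.append(w)
        return ortho, mu

    b = [list(v) for v in basis]
    n, k = len(b), 1
    while k < n:
        ortho, mu = gs(b)
        # size-reduce b[k] against b[k-1], ..., b[0] so |mu| <= 1/2
        for j in range(k - 1, -1, -1):
            m = round(mu[k][j])
            if m:
                b[k] = [x - m * y for x, y in zip(b[k], b[j])]
                ortho, mu = gs(b)
        # Lovasz condition: 3/4*||vk*||^2 <= ||v_{k+1}* + mu*vk*||^2
        if dot(ortho[k], ortho[k]) >= (delta - mu[k][k - 1] ** 2) * dot(ortho[k - 1], ortho[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return b
```

For the basis (1,1), (0,5) one size-reduction already produces the reduced basis (1,1), (-2,3); its first vector has squared length 2, within the guaranteed 2^{(n-1)/2} factor of the shortest vector, and the determinant (here 5) is preserved, so it spans the same lattice.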