
Polynomial Reachability in Modular Arithmetic

Presentation Transcript


  1. Polynomial Reachability in Modular Arithmetic Arturo Rosas, Chris Jones

  2. Rings • In abstract algebra, a ring is a nonempty set in which the following hold: • Addition and multiplication are defined • Associativity, commutativity, and distributivity hold • The set contains an additive identity • Each element of the set has an additive inverse that is also in the set

  3. Modular Arithmetic and Zn • Zn: a ring whose underlying set is the integers {0, 1, …, n-1} • Ex: Z4 contains {0, 1, 2, 3} • Addition is defined as (a + b) mod n • Multiplication is defined as (ab) mod n
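
A minimal sketch of Zn arithmetic (Python is used only for illustration; the slides give no code), printing the addition and multiplication tables for the Z4 example above:

```python
# Arithmetic in Z_n, illustrated with n = 4 as on the slide.
n = 4

def add_mod(a, b):
    return (a + b) % n

def mul_mod(a, b):
    return (a * b) % n

# Addition and multiplication tables for Z_4.
print([[add_mod(a, b) for b in range(n)] for a in range(n)])
print([[mul_mod(a, b) for b in range(n)] for a in range(n)])
```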

  4. Functions and Polynomials • f: Zn → Zn • Maps each element of Zn to an element of Zn (possibly itself) • Example in Z4: f(0)=0, f(1)=1, f(2)=0, f(3)=1 • There are n^n functions from Zn to itself, but only some can be represented by polynomials of the form f(x) = a_0 + a_1x + a_2x^2 + … + a_(n-1)x^(n-1) • The question (and objective): how many, and which ones?
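
A brute-force sketch of the counting question: enumerate all coefficient vectors over Z_4 and collect the distinct function tables they induce (the helper name polynomial_function is ours, not from the slides):

```python
from itertools import product

def polynomial_function(coeffs, n):
    """Function table (f(0), ..., f(n-1)) induced over Z_n by the polynomial
    a_0 + a_1 x + ... + a_(n-1) x^(n-1) with the given coefficients."""
    return tuple(sum(c * pow(x, i, n) for i, c in enumerate(coeffs)) % n
                 for x in range(n))

n = 4
# Enumerate all n^n coefficient vectors and keep the distinct function tables.
distinct = {polynomial_function(c, n) for c in product(range(n), repeat=n)}
print(len(distinct), "of", n ** n, "functions on Z_4 are polynomial")
```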

  5. Applications in Computer Science • Cryptography • One major application is cryptanalysis • Can help determine weaknesses in keys

  6. Attacks on RSA • Johan Hastad • Solving simultaneous modular equations of low degree • Coppersmith's Attack • Related to Hastad's work • Where do we come in? • Polynomial collisions • A smaller number of distinct polynomials for a given n

  7. Methodology • Vandermonde Matrix • Each row holds the successive powers of one sample point: the entry in row i, column j is x_i^j • Useful in interpolation problems
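
A small sketch of the Vandermonde matrix over Z_n, assuming the sample points are 0, 1, …, n-1 (the natural choice when interpolating functions on Z_n):

```python
def vandermonde_mod(n):
    """Vandermonde matrix over Z_n: row x is [1, x, x^2, ..., x^(n-1)] mod n
    for x = 0, 1, ..., n-1."""
    return [[pow(x, j, n) for j in range(n)] for x in range(n)]

print(vandermonde_mod(4))
```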

  8. Slow Method • For every function f, augment the Vandermonde matrix with the column vector of its values f(0), f(1), …, f(n-1) • Then use Gaussian elimination to interpolate the coefficients of f(x)
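
A sketch of the slow method for a single function, assuming n is prime so that every nonzero element of Z_n is invertible (composite n needs extra care); the slow method repeats this elimination for each of the n^n candidate functions:

```python
def interpolate_slow(values, p):
    """Interpolate the coefficients of f over Z_p (p prime) by Gauss-Jordan
    elimination on the augmented Vandermonde system [V | f-values]."""
    n = len(values)
    A = [[pow(x, j, p) for j in range(n)] + [values[x] % p] for x in range(n)]
    for col in range(n):
        # Pick a row with a nonzero (hence invertible, since p is prime) pivot.
        piv = next(r for r in range(col, n) if A[r][col] % p != 0)
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], -1, p)              # modular inverse, needs p prime
        A[col] = [a * inv % p for a in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * b) % p for a, b in zip(A[r], A[col])]
    return [A[i][n] for i in range(n)]             # last column = coefficients

# Example in Z_5: interpolate the function table of f(x) = x^2 + 1.
print(interpolate_slow([(x * x + 1) % 5 for x in range(5)], 5))
```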

  9. Faster Method • The slow method repeats a lot of identical operations • How can we exploit this? • Use Gaussian elimination only once • While reducing the Vandermonde matrix, record the row operations as a linear transformation τ that maps a vector of f(x) values to the corresponding coefficient vector • Apply this transformation to each column vector of f(x) values
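
A sketch of the faster method: reduce the Vandermonde matrix once, recording the row operations in τ, then interpolate any function with a single matrix-vector product. Again n is assumed prime, and the helper names are ours:

```python
def transform_tau(p):
    """Reduce the Vandermonde matrix once over Z_p (p prime), applying every
    row operation to an identity matrix as well, so that tau * V = I.
    tau then maps any value vector [f(0), ..., f(p-1)] to f's coefficients."""
    V = [[pow(x, j, p) for j in range(p)] for x in range(p)]
    tau = [[int(i == j) for j in range(p)] for i in range(p)]
    for c in range(p):
        piv = next(r for r in range(c, p) if V[r][c] % p != 0)
        V[c], V[piv], tau[c], tau[piv] = V[piv], V[c], tau[piv], tau[c]
        inv = pow(V[c][c], -1, p)
        V[c] = [a * inv % p for a in V[c]]
        tau[c] = [a * inv % p for a in tau[c]]
        for r in range(p):
            if r != c and V[r][c]:
                f = V[r][c]
                V[r] = [(a - f * b) % p for a, b in zip(V[r], V[c])]
                tau[r] = [(a - f * b) % p for a, b in zip(tau[r], tau[c])]
    return tau

def interpolate_fast(values, tau, p):
    # One matrix-vector product per function instead of a full elimination.
    return [sum(t * v for t, v in zip(row, values)) % p for row in tau]

tau = transform_tau(5)
print(interpolate_fast([(x * x + 1) % 5 for x in range(5)], tau, 5))
```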

  10. How to parallelize? • Allocate the column vectors of the Vandermonde matrix to the processes in a striped fashion, as in regular parallel Gaussian elimination (see the sketch below)
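
A tiny sketch of the striped (cyclic) column assignment; owned_columns is a hypothetical helper, not from the slides:

```python
def owned_columns(rank, num_procs, n):
    """Cyclic (striped) assignment: process `rank` owns columns
    rank, rank + P, rank + 2P, ... of the n x n Vandermonde matrix."""
    return list(range(rank, n, num_procs))

# With n = 8 and P = 4 processes, process 1 owns columns 1 and 5.
print(owned_columns(1, 4, 8))
```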

  11. How to parallelize? • After each successive row operation, update τ • K = total number of row operations needed to reduce the Vandermonde matrix • Once the Vandermonde matrix is in reduced form, we can interpolate each function in parallel

  12. How to parallelize? • Once the Vandermonde matrix is in reduced form, we can interpolate each function in parallel • Each process holds one row vector of τ • Each process computes its respective coefficient, and a simple MPI_Gather call collects the values and reconstructs the polynomial (see the sketch below)
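
A minimal mpi4py sketch of the gather step (mpi4py, the helper code, and the example function f(x) = x^2 + 1 are illustrative assumptions; the slides only name MPI_Gather). Each process receives one row of τ, computes its coefficient, and rank 0 gathers the coefficient vector:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n = comm.Get_size()   # run with P = n processes, n prime, e.g. mpiexec -n 5 python demo.py

tau = None
if rank == 0:
    # Rank 0 builds tau = V^(-1) mod n once (sequentially here for brevity;
    # the slides distribute this elimination across the processes as well).
    V = [[pow(x, j, n) for j in range(n)] for x in range(n)]
    tau = [[int(i == j) for j in range(n)] for i in range(n)]
    for c in range(n):
        piv = next(r for r in range(c, n) if V[r][c] % n)
        V[c], V[piv], tau[c], tau[piv] = V[piv], V[c], tau[piv], tau[c]
        inv = pow(V[c][c], -1, n)
        V[c] = [a * inv % n for a in V[c]]
        tau[c] = [a * inv % n for a in tau[c]]
        for r in range(n):
            if r != c and V[r][c]:
                f = V[r][c]
                V[r] = [(a - f * b) % n for a, b in zip(V[r], V[c])]
                tau[r] = [(a - f * b) % n for a, b in zip(tau[r], tau[c])]

my_row = comm.scatter(tau, root=0)                      # one row of tau per process
values = [(x * x + 1) % n for x in range(n)]            # table of f(x) = x^2 + 1
a_i = sum(t * v for t, v in zip(my_row, values)) % n    # this rank's coefficient
coeffs = comm.gather(a_i, root=0)                       # the MPI_Gather call
if rank == 0:
    print("reconstructed coefficients of f:", coeffs)
```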

  13. Time complexities (sequential) • Sequential Gaussian elimination for each possible function (slow method): O(n^3) * number of functions to check (n^n for all of Zn) • Improved sequential algorithm: • Gaussian elimination with matrix multiplication at each step: O(n^4) • Substitution w/ matrix multiplication at each step: O(n^3) • Matrix-vector multiplication and solving for coefficients: O(n^2) for each function • Total: O(n^4) + O(n^3) + O(n^2) * number of functions to check (n^n for all of Zn) = O(n^2) * number of functions to check (n^n for all of Zn)

  14. Parallel Time when P = n • Parallel elimination w/ parallel matrix multiplication at each step: O(n^3) • Parallel substitution w/ parallel matrix multiplication at each step: O(n^3) • Parallel matrix-vector multiplication and solving for coefficients: O(n) for each function • Total: O(n^3) + O(n^3) + O(n) * number of functions to check (n^n for all of Zn) = O(n) * number of functions to check (n^n for all of Zn)

  15. Speedup and Efficiency • Speedup = Tseq/Tpar = (O(n^2) * n^n) / (O(n) * n^n) = O(n^2)/O(n) = n → linear speedup • Efficiency = Speedup / P = n/n = 1 • Constant efficiency suggests the algorithm is cost-optimal

  16. Works Cited • Hastad, Johan. "Solving Simultaneous Modular Equations of Low Degree." SIAM Journal on Computing 17 (1988): 336-341. Print. • Capretta, Venanzio. "Mathematics for Computer Scientists." Web. <http://www.cs.nott.ac.uk/~vxc/g51mcs/ch06_combinatorics.pdf>.
