
Elementary Number Theory



  1. Elementary Number Theory • Given positive integers a and b, we use the notation a|b to indicate that a divides b, i.e., b is a multiple of a • If a|b, then there is an integer k s.t. b = a·k • The following properties follow. Thm: Let a, b and c > 0 be integers; then • if a|b and b|c, then a|c • if a|b and a|c, then a|(i·b + j·c), for all integers i and j • if a|b and b|a, then a = b

  2. Prime and Composite Numbers • An integer p is said to be prime if p ≥ 2 and its only divisors are the trivial divisors 1 and p • An integer greater than 1 that is not prime is said to be composite • Example: 2, 5, 11, 101 and 98711 are prime, but 25 and 10403 (= 101 · 103) are composite

  3. Fund. Theorem of Arithmetic • Thm: Let n > 1 be an integer. Then there is a unique set of prime numbers {p1, …, pk} and positive integer exponents {e1, …, ek} s.t. n = p1^e1 · p2^e2 · … · pk^ek • This product is known as the prime decomposition of n
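
A minimal trial-division sketch of computing the prime decomposition, added here for illustration (the function name prime_decomposition is our own, not part of the slides):

    def prime_decomposition(n):
        """Return the prime decomposition of n > 1 as a dict {p_i: e_i},
        so that the product of p_i**e_i equals n (simple trial division)."""
        factors = {}
        d = 2
        while d * d <= n:
            while n % d == 0:              # divide out the prime factor d completely
                factors[d] = factors.get(d, 0) + 1
                n //= d
            d += 1
        if n > 1:                          # whatever remains is itself prime
            factors[n] = factors.get(n, 0) + 1
        return factors

    print(prime_decomposition(10403))      # {101: 1, 103: 1}, matching slide 2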

  4. Greatest Common Divisor (GCD) • The greatest common divisor of positive integers a and b, denoted gcd(a,b), is the largest integer that divides both a and b • If gcd(a,b) = 1, we say that a and b are relatively prime • The notion of GCD can be extended: • gcd(a,0) = gcd(0,a) = a • gcd(a,b) = gcd(|a|,|b|), used when a or b is negative

  5. Modulo Operator • The modulo operator, written a mod n, gives the remainder r of a when divided by n, i.e., r = a mod n • That is, r = a - ⌊a/n⌋·n; in other words, there is some integer q s.t. a = q·n + r, where 0 ≤ r < n
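
A quick illustration of the identity a = q·n + r with 0 ≤ r < n, using Python's built-in divmod (the particular values are chosen arbitrarily):

    a, n = 412, 260
    q, r = divmod(a, n)                    # q = floor(a / n), r = a mod n
    assert a == q * n + r and 0 <= r < n
    print(q, r)                            # 1 152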

  6. GCD property • Thm: Let a and b be two positive integers with a ≥ b; then gcd(a,b) = gcd(b, a mod b) • Proof: • Let d = gcd(a,b) and r = a mod b, so a = q·b + r for some integer q • d|r, since r = a - q·b and d|a and d|q·b; together with d|b this gives d ≤ gcd(b,r) • Also gcd(b,r) ≤ d: otherwise some d′ > d satisfies d′|b and d′|r, hence d′|a (since a = q·b + r), contradicting d = gcd(a,b) • Hence gcd(b,r) = d = gcd(a,b)

  7. Euclid’s GCD Algorithm (assuming a ≥ b)

  8. Euclid’s GCD Algorithm • Algorithm EuclidGCD(a,b): • Input: non-negative integers a ≥ b • Output: gcd(a,b) • while b ≠ 0 do (a,b) := (b, a mod b) od • return a • Example trace for gcd(412, 260): a takes the values 412, 260, 152, 108, 44, 20, 4 while b takes the values 260, 152, 108, 44, 20, 4, 0, so the algorithm returns 4
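
A direct Python transcription of the pseudocode above, offered only as a sketch (the function name euclid_gcd is our own):

    def euclid_gcd(a, b):
        """Iterative Euclid's algorithm; assumes non-negative integers a >= b."""
        while b != 0:
            a, b = b, a % b                # (a, b) := (b, a mod b)
        return a

    print(euclid_gcd(412, 260))            # 4, matching the trace above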

  9. Euclid’s Algorithm (Complexity) • For i > 0, let ai be the first argument of the i-th recursive call (or iteration) of the algorithm EuclidGCD • We have ai+2 = ai mod ai+1 • One can show that ai+2 < ½·ai • Thm: Let a > b be two positive integers. Euclid’s algorithm computes gcd(a,b) by executing O(log max(a,b)) arithmetic operations

  10. Cryptographic Computations • A variety of cryptographic techniques have been developed to support secure communication over insecure networks such as the Internet • These include: • encryption/decryption transformations • digital signatures

  11. Symmetric Encryption Schemes • Confidentiality during transmission can be achieved by encryption schemes, or ciphers, where • the plain-text message M is encrypted (before transmission) into an unrecognisable string of characters C, called the cipher-text • after the cipher-text C is received, it is transformed back into the plain-text M using decryption

  12. Symmetric Encryption Schemes • [Slide figure: Alice encrypts message M into cipher-text C and sends it to Bob, who decrypts C back into M; the eavesdropper Eve observes the channel but sees only the unintelligible cipher-text]

  13. Secret Keys • In traditional cryptography, a common secret key k is shared by Alice and Bob • It is used both to encrypt and to decrypt the message • Such schemes are called symmetric encryption schemes, since • k is used for both encryption and decryption, and • the same secret key is shared by Alice and Bob

  14. Substitution Cipher • A classic example of a symmetric cipher is the substitution cipher, where the secret key is a permutation π of the characters of the alphabet • Encrypting plain-text M into cipher-text C consists of replacing each character x of M with the character y = π(x) • Decryption is easily performed by anyone who knows the permutation π: M is recovered from C by replacing each character y of C with the character x = π^(-1)(y)
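
A small sketch of such a cipher over the lowercase alphabet, with the secret permutation stored as a pair of lookup tables (all names below are our own; non-letters are passed through unchanged):

    import random
    import string

    alphabet = string.ascii_lowercase
    perm = random.sample(alphabet, len(alphabet))   # the secret key: a permutation pi
    encrypt_map = dict(zip(alphabet, perm))         # x -> pi(x)
    decrypt_map = dict(zip(perm, alphabet))         # y -> pi^(-1)(y)

    def encrypt(m):
        return "".join(encrypt_map.get(x, x) for x in m)

    def decrypt(c):
        return "".join(decrypt_map.get(y, y) for y in c)

    assert decrypt(encrypt("attack at dawn")) == "attack at dawn"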

  15. The Caesar Cipher • The Caesar cipher is an early example of a substitution cipher, where • each character x is replaced by the character y = (x + k) mod n, where • n is the size of the alphabet, and • 1 ≤ k < n is the secret key • The scheme is named after Julius Caesar, who is known to have used it with k = 3
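
A hedged sketch of the Caesar shift for lowercase text only; note that decryption is simply a shift by -k:

    def caesar(text, k, n=26):
        """Replace each lowercase letter x by (x + k) mod n; leave other characters alone."""
        return "".join(
            chr((ord(x) - ord('a') + k) % n + ord('a')) if x.islower() else x
            for x in text
        )

    c = caesar("veni vidi vici", 3)        # 'yhql ylgl ylfl'
    assert caesar(c, -3) == "veni vidi vici"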

  16. Breaking Substitution Ciphers • Substitution ciphers are quite easy to use, but they are not secure • The secret key can be quickly inferred by frequency analysis, using knowledge of the frequencies of individual letters, and of groups of consecutive letters, in the language of the text

  17. The One-Time Pad • Secure symmetric ciphers do exist! • In fact, the most secure cipher known is a symmetric cipher, known as the “one-time pad” • In this cryptosystem, Alice and Bob share a random bit string K as large as any message they might wish to communicate • The string K is the symmetric key: it is used to compute a cipher-text C from a message M as follows

  18. The One-Time Pad (encryption) • Alice computes C = M ⊕ K, where ⊕ denotes the bit-wise exclusive-or operator • She then sends C to Bob over any reliable communication channel, even one on which Eve is eavesdropping • The communication is secure, because the cipher-text C is computationally indistinguishable from a random string

  19. The One-Time Pad (decryption) • Bob can easily decrypt the cipher-text C by computing C ⊕ K, since: • C ⊕ K = (M ⊕ K) ⊕ K = M ⊕ (K ⊕ K) = M ⊕ 0 = M, where • 0 denotes the bit string of all 0’s of the same length as M • This scheme is clearly a symmetric cipher, since the key K is used for both encryption and decryption
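
A byte-level sketch of the one-time pad round trip, using a fresh random key of the same length as the message (variable names are ours):

    import secrets

    def xor_bytes(x, y):
        return bytes(a ^ b for a, b in zip(x, y))

    M = b"meet me at noon"
    K = secrets.token_bytes(len(M))        # random key, as long as M, used only once
    C = xor_bytes(M, K)                    # encryption: C = M xor K
    assert xor_bytes(C, K) == M            # decryption: (M xor K) xor K = M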

  20. The One-Time Pad (analysis) • Advantages: • computationally efficient, since bit-wise exclusive-or is very easy to compute • very secure • Disadvantages: • Alice and Bob must share a very large secret key • security depends on the fact that the secret key is used only once! • In practical cryptosystems we prefer secret keys that can be reused and that are smaller than the messages they encrypt and decrypt

  21. Public-Key Cryptosystems • A major problem with symmetric ciphers is key transfer: how to securely distribute the secret key used for encryption and decryption • Diffie and Hellman described an abstract system that overcomes this problem: the public-key cryptosystem

  22. Public-Key Cryptosystems • Given a message M, an encryption function E, and a decryption function D, the following properties hold: • D(E(M)) = M • both E and D are easy to compute • it is computationally infeasible to derive D from E • E(D(M)) = M

  23. Public-Key Cryptosystems • The third property means that E only goes in one direction, i.e., • it is computationally infeasible to invert E unless you already know D • Thus, the encryption procedure E can be made public • Any party can then send an encrypted message, while only the holder of D knows how to decrypt it

  24. Public-Key Cryptosystems • If the fourth property also holds, then the mapping is one-to-one, and • the cryptosystem is a solution to the digital signature problem, i.e., • to sign a message M, Bob applies his private decryption procedure D to M • any other party can then verify that Bob actually sent the message by applying the public encryption procedure E • since only Bob knows the decryption procedure, only Bob can generate such a signature

  25. The RSA Cryptosystem • Probably the best-known public-key cryptosystem is also one of the oldest, and its security is tied to the difficulty of factoring large numbers • It is named RSA after its inventors Rivest, Shamir and Adleman • In this cryptosystem we begin by selecting two large primes p and q

  26. The RSA Cryptosystem • Let n = p·q and φ(n) = (p-1)·(q-1) • The encryption and decryption keys e and d are selected so that • e and φ(n) are relatively prime • e·d ≡ 1 (mod φ(n)) • The pair of values n and e forms the public key, while d is the private key

  27. RSA encryption/decryption
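
The transcript does not preserve the body of this slide. As a sketch of the standard RSA rules C = M^e mod n and M = C^d mod n, here is a toy example with deliberately tiny primes (the variable names are ours; real keys use primes hundreds of digits long):

    p, q = 11, 13                          # toy primes, far too small for real use
    n, phi = p * q, (p - 1) * (q - 1)      # n = 143, phi(n) = 120
    e = 7                                  # chosen so that gcd(e, phi) = 1
    d = pow(e, -1, phi)                    # d = 103, since e*d = 1 (mod phi(n))

    M = 9
    C = pow(M, e, n)                       # encryption: C = M^e mod n
    assert pow(C, d, n) == M               # decryption: C^d mod n recovers M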

  28. RSA for Digital Signature
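
This slide's body is likewise missing from the transcript. Continuing in the same toy setting, a signature is produced with the private exponent d and verified with the public pair (n, e); the numbers below reuse the key from the previous sketch:

    n, e, d = 143, 7, 103                  # toy key from the sketch above
    M = 9
    S = pow(M, d, n)                       # Bob signs: S = M^d mod n
    assert pow(S, e, n) == M               # anyone verifies: S^e mod n = M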

  29. The Fast Fourier Transform • A common bottleneck computation in many cryptographic systems is the multiplication of large integers and polynomials • The Fast Fourier Transform is a surprising and efficient procedure for multiplying such objects

  30. The Fast Fourier Transform • A polynomial represented in coefficient form is described by a coefficient vector a = [a0, a1, …, an-1] as follows: p(x) = a0 + a1·x + a2·x^2 + … + an-1·x^(n-1) • The degree of such a polynomial is the largest index of a non-zero coefficient ai • A coefficient vector of length n can represent polynomials of degree ≤ n-1

  31. Multiplication of Polynomials • Multiplying two polynomials p(x) and q(x), given in coefficient form, is not straightforward • Consider the product p(x)·q(x), where q(x) = Σi=0..n-1 bi·x^i • Its coefficients are given by ci = a0·bi + a1·bi-1 + … + ai·b0, i.e., each ci sums all products aj·bi-j with j = 0, …, i

  32. Convolution and FFT • This equation defines a vector c = [c0, c1, …, c2n-1], which we call the convolution of the vectors a and b • For symmetry reasons, we view the convolution as a vector of size 2n, defining c2n-1 = 0 • We denote the convolution of a and b as a ∗ b • If we apply the definition of the convolution directly, then it takes Θ(n^2) time to multiply the two polynomials p and q • The Fast Fourier Transform (FFT) algorithm allows us to perform this multiplication in O(n log n) time
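
A direct Θ(n^2) implementation of the convolution just described, padded to size 2n as on the slide (the function name is ours):

    def convolve(a, b):
        """Direct convolution of two length-n coefficient vectors, output size 2n."""
        n = len(a)
        c = [0] * (2 * n)                  # pad to size 2n, so c[2n-1] = 0
        for i in range(n):
            for j in range(n):
                c[i + j] += a[i] * b[j]    # c_k collects all products a_i*b_j with i + j = k
        return c

    # (1 + 2x)·(3 + 4x) = 3 + 10x + 8x^2
    print(convolve([1, 2], [3, 4]))        # [3, 10, 8, 0]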

  33. The Interpolation Theorem • The improvement of the FFT is based on another way of representing a degree-(n-1) polynomial: by its values at n distinct inputs • (Interpolation Theorem: a polynomial of degree at most n-1 is uniquely determined by its values at n distinct inputs)

  34. Fast Fourier Transform • The Interpolation Theorem suggests an alternative representation of polynomials, as well as a multiplication method • To compute the product of p(x) and q(x): • evaluate p and q at 2n different inputs x1, …, x2n • compute the representation of the product of p and q as the set of pairs {(xi, p(xi)·q(xi)) : i = 1, …, 2n} • Given the evaluations, this point-wise multiplication takes O(n) time

  35. Primitive Roots of Unity • A number ω is a primitive nth root of unity, for n > 1, if it satisfies the following properties: • ω^n = 1, i.e., ω is an nth root of 1 • the numbers 1, ω, ω^2, …, ω^(n-1) are distinct • Note that this definition implies that a primitive nth root of unity has a multiplicative inverse ω^(-1) = ω^(n-1), since ω^(-1)·ω = ω^(n-1)·ω = ω^n = 1

  36. Primitive Roots of Unity • The notion of a primitive nth root of unity has several important instances • One is the complex number e^(2πi/n) = cos(2π/n) + i·sin(2π/n) • which is a primitive nth root of unity when we take our arithmetic over the complex numbers, where i = √-1

  37. Properties of ω • Reduction Property: if ω is a primitive (2n)th root of unity, then ω^2 is a primitive nth root of unity • Reflective Property: if ω is a primitive nth root of unity and n is even, then ω^(n/2) = -1

  38. Discrete Fourier Transform • Let’s return to the problem of evaluating a polynomial defined by a coefficient vector a, p(x) = Σi=0..n-1 ai·x^i, at a carefully chosen set of input values • The Discrete Fourier Transform (DFT) is to evaluate p(x) at the nth roots of unity 1, ω, ω^2, …, ω^(n-1)

  39. Discrete Fourier Transform • Formally, the DFT for the polynomial p represented by the coefficient vector a is defined as the vector y of values yj = p(ω^j), where ω is a primitive nth root of unity, i.e., yj = Σi=0..n-1 ai·ω^(i·j) • In the language of matrices, we can think of the vector y of yj values and the vector a as column vectors, and say that y = F·a, where F is an n × n matrix s.t. F[i,j] = ω^(i·j)

  40. The Inverse DFT • Interestingly, the matrix F has an inverse, F^(-1), so that F^(-1)(F(a)) = a for all a • The matrix F^(-1) allows us to define an inverse DFT • If we are given a vector y of the values of a degree-(n-1) polynomial p at the nth roots of unity, then we can recover a coefficient vector for p by computing a = F^(-1)·y • Moreover, the matrix F^(-1) has the form F^(-1)[i,j] = ω^(-i·j)/n • Thus we can recover each coefficient ai as ai = (1/n)·Σj=0..n-1 yj·ω^(-i·j)
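
A naive O(n^2) sketch of the DFT and its inverse, written straight from the two formulas above (this is not yet the fast algorithm; the function names are ours):

    import cmath

    def dft(a):
        """y_j = sum_i a_i * w^(i*j), where w is a primitive n-th root of unity."""
        n = len(a)
        w = cmath.exp(2j * cmath.pi / n)
        return [sum(a[i] * w ** (i * j) for i in range(n)) for j in range(n)]

    def inverse_dft(y):
        """a_i = (1/n) * sum_j y_j * w^(-i*j)."""
        n = len(y)
        w = cmath.exp(2j * cmath.pi / n)
        return [sum(y[j] * w ** (-i * j) for j in range(n)) / n for i in range(n)]

    a = [1, 2, 3, 4]
    print([round(x.real) for x in inverse_dft(dft(a))])   # [1, 2, 3, 4]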

  41. Computing Convolution

  42. Computing Convolution
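
The bodies of these two slides are not preserved in the transcript. As a sketch of the overall scheme (transform, multiply point-wise, transform back), here is a version built on NumPy's FFT routines:

    import numpy as np

    def fft_convolve(a, b):
        """Convolution of two length-n vectors via the convolution theorem."""
        n = len(a)
        A = np.fft.fft(a, 2 * n)           # DFT of a, zero-padded to size 2n
        B = np.fft.fft(b, 2 * n)           # DFT of b, zero-padded to size 2n
        C = A * B                          # point-wise product equals the DFT of the convolution
        return np.round(np.fft.ifft(C).real).astype(int)

    print(fft_convolve([1, 2], [3, 4]))    # [ 3 10  8  0], matching the direct method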

  43. The Fast Fourier Transform

  44. The Fast Fourier Transform
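
The slides' pseudocode is likewise missing from the transcript; the following is a hedged recursive sketch of the standard Cooley–Tukey FFT, assuming the length of a is a power of two:

    import cmath

    def fft(a):
        """Evaluate the polynomial with coefficients a at the n n-th roots of unity."""
        n = len(a)
        if n == 1:
            return a[:]
        even = fft(a[0::2])                        # recurse on even-indexed coefficients
        odd = fft(a[1::2])                         # recurse on odd-indexed coefficients
        w = cmath.exp(2j * cmath.pi / n)           # primitive n-th root of unity
        y = [0] * n
        for j in range(n // 2):                    # combine step: O(n) time
            t = (w ** j) * odd[j]
            y[j] = even[j] + t
            y[j + n // 2] = even[j] - t            # uses the reflective property w^(n/2) = -1
        return y

On inputs whose length is a power of two this returns the same values as the naive dft sketch above; the divide-and-conquer structure is what yields the O(n log n) running time analysed on the next slides.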

  45. FFT analysis • The FFT algorithm follows the divide-and-conquer paradigm, dividing the original problem of size n into two sub-problems of size n/2, which are solved recursively • We assume that each arithmetic operation performed by the algorithm takes O(1) time • The divide step and the combine step, which merges the recursive solutions, each take O(n) time

  46. FFT analysis • Thus, we can characterise the running time T(n) of the FFT algorithm by the recurrence equation • T(n) = 2·T(n/2) + b·n, for a constant b > 0 • This recurrence is well known to solve to T(n) being O(n log n), so the FFT runs in O(n log n) time
