
“Stable Distributions, Pseudorandom Generators, Embeddings and Data Stream Computation”


Presentation Transcript


  1. “Stable Distributions, Pseudorandom Generators, Embeddings and Data Stream Computation” • Paper by Piotr Indyk • Presentation by Andy Worms and Arik Chikvashvili

  2. Abstract • Stable Distributions • Stream computation by combining the use of stable distributions • Pseudorandom Generators (PRG) and their use to reduce memory usage

  3. Introduction • Input: an n-dimensional point p, given under the L1 norm. • Storage: a sketch C(p) of O(log n/ε) words. • Property: given C(p) and C(q), we can estimate |p − q|_1 for points p and q up to a factor (1 + ε) w.h.p.

  4. Introduction. Stream computation • Given a stream S of data, each chunk of data is of the form (i, a), where i ∈ [n] = {0, …, n−1} and a ∈ {−M, …, M}. • We want to approximate the quantity L1(S) = Σ_i |c_i|, where c_i is the sum of all values a that arrive paired with index i.

  5. Other results • N. Alon, Y. Matias and M. Szegedy: space O(1/ε), w.h.p., approximating L2(S), provided each i appears in at most two pairs (i, a) [1996]. • J. Feigenbaum, S. Kannan, M. Strauss and M. Viswanathan showed the same for L1(S), likewise with at most two pairs per i [1999].

  6. Stable Distributions • A distribution D over ℝ is p-stable, for p ≥ 0, if for any real numbers a_1, …, a_n and i.i.d. variables X_1, …, X_n with distribution D, the variable Σ_i a_i·X_i has the same distribution as the variable (Σ_i |a_i|^p)^{1/p}·X, where X also has distribution D.

  7. Some News • The good news: there exist stable distributions for every p in (0, 2]. • The bad news: most have no closed formula (i.e., they are non-constructive). • The Cauchy distribution is 1-stable. • The Gauss distribution is 2-stable. • Source: V. M. Zolotarev, “One-Dimensional Stable Distributions” (518.2 ZOL in AAS)
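
Since the Cauchy distribution is 1-stable and easy to sample (tan(π(U − 1/2)) for U uniform on [0, 1)), the stability property can be checked empirically. Below is a minimal Python sketch of such a check; it is our own illustration, not code from the paper, and the coefficients and sample count are arbitrary:

```python
import math
import random

def cauchy():
    # Inverse-CDF sampling: tan(pi*(U - 1/2)) is standard Cauchy.
    return math.tan(math.pi * (random.random() - 0.5))

# 1-stability: sum(a_i * X_i) should be distributed like (|a_1|+...+|a_n|) * X.
a = [3.0, -1.0, 2.0]                  # sum of |a_i| is 6
N = 100_000
combo  = sorted(abs(sum(ai * cauchy() for ai in a)) for _ in range(N))
scaled = sorted(abs(6 * cauchy()) for _ in range(N))
print(combo[N // 2], scaled[N // 2])  # both medians come out near 6
```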

  8. Reminder • Cauchy density: f(x) = 1/(π(1 + x²)). • Gauss density: f(x) = (1/√(2π))·e^(−x²/2).

  9. Tonight’s Program • “Obvious solution”. • Algorithm for p = 1. • The algorithm’s limitations. • Proof of correctness. • Overcome the limitations. • Time permitting: p = 2 and all p in (0, 2].

  10. The Obvious Solution • Hold a counter for each i, and update it on each pair found in the input stream. • Breaks the stream model: O(n) memory.
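
For reference, here is a minimal Python sketch of this obvious solution (our own illustration; the dictionary plays the role of the per-index counters):

```python
from collections import defaultdict

def exact_l1(stream):
    """Exact L1(S): one counter c_i per index i -- O(n) memory."""
    c = defaultdict(int)
    for i, a in stream:
        c[i] += a                          # accumulate all updates to index i
    return sum(abs(v) for v in c.values())
```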

  11. The Problem (p = 1) • Input: a stream S of pairs (i, a) such that 0 ≤ i < n and −M ≤ a ≤ M. • Output: an estimate of L1(S) = Σ_i |c_i|, where c_i is the sum of all a’s arriving with index i, • up to an error factor of 1 ± ε, • with probability 1 − δ.

  12. Definitions (c is defined later) • X_i^j, for 0 ≤ i < n and 0 ≤ j < l, are independent random variables with Cauchy distribution. • S_j, for 0 ≤ j < l: a set of buckets, initially zero.

  13. The Algorithm • For each new pair (i, a): add a·X_i^j to every bucket S_j, 0 ≤ j < l. • Return median(|S_0|, |S_1|, …, |S_{l−1}|).
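
Putting slides 12-13 together, here is a minimal Python sketch of the whole algorithm (our own code, not the paper's; it still stores the full n×l random matrix, which is exactly the limitation addressed later, on slides 31 and 40):

```python
import math
import random

def estimate_l1(stream, n, l, seed=0):
    """Approximate L1(S) with l buckets of Cauchy sketches (slides 12-13)."""
    rng = random.Random(seed)
    # X[i][j]: independent Cauchy variables, via inverse-CDF sampling.
    X = [[math.tan(math.pi * (rng.random() - 0.5)) for _ in range(l)]
         for _ in range(n)]
    S = [0.0] * l                              # buckets, initially zero
    for i, a in stream:
        for j in range(l):
            S[j] += a * X[i][j]                # S_j += a * X_i^j
    return sorted(abs(s) for s in S)[l // 2]   # median(|S_0|, ..., |S_{l-1}|)
```

On the example stream used on slide 15 below, estimate_l1([(2, 1), (5, -2), (4, -1)], n=7, l=3) targets L1(S) = 1 + 2 + 1 = 4, although l = 3 is of course far too few buckets for a reliable estimate.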

  14. Limitations • It assumes infinite-precision arithmetic. • It repeatedly accesses the random numbers X_i^j in random order, so they must all be stored (or be regenerable on demand).

  15. Buckets / Data Stream • Example (for illustration only): n = 7, l = 3; the pairs (2, +1), (5, −2), (4, −1) arrive on the stream and are fed into the buckets. [Diagram: the data stream updating the three buckets.]

  16. Correctness Proof • We define: c_i = Σ_{(i,a)∈S} a (with c_i = 0 if there is no pair (i, a) in S). • Therefore: S_j = Σ_i c_i·X_i^j, and L1(S) = Σ_i |c_i|.

  17. Correctness proof (cont.) • Claim 1: Each S_j has the same distribution as C·X, where C = Σ_i |c_i| = L1(S) and X is a random variable with Cauchy distribution. • Proof: follows from the 1-stability of the Cauchy distribution.

  18. Correctness Proof (cont.) • Lemma 1: If X has Cauchy distribution, then median(|X|) = 1 and median(a·|X|) = a for any a > 0. • Proof: The distribution function of |X| is F(z) = (2/π)·arctan(z). • Since tan(π/4) = 1, F(1) = (2/π)·(π/4) = 1/2. • Thus median(|X|) = 1 and median(a·|X|) = a.

  19. (Graphics Intermezzo)

  20. Correctness proof (cont.) • Fact: For any distribution D on ℝ with distribution function F, take independent samples X_0, …, X_{l−1} of D, and let X = median(X_0, …, X_{l−1}). Then for l = O(log(1/δ)/ε²) samples, F(X) ∈ [1/2 − ε, 1/2 + ε] with probability at least 1 − δ.

  21. Correctness Proof (cont.) • The Fact in plain words: • You choose a (small) error and a (high) probability. • With enough samples, you will pin down the median to within the small error, with the high probability.

  22. Correctness Proof (cont.) • Lemma 2: Let F be the distribution function of |X|, where X has Cauchy distribution, and let z > 0 be such that 1/2 − ε ≤ F(z) ≤ 1/2 + ε. Then, if ε is small enough, 1 − 4ε ≤ z ≤ 1 + 4ε. • Proof: F(z) = (2/π)·arctan(z), so F′(z) = 2/(π(1 + z²)) ≥ 1/4 everywhere near z = 1. By the mean value theorem, |F(z) − F(1)| ≥ |z − 1|/4 there; since F(1) = 1/2 and |F(z) − 1/2| ≤ ε, we get |z − 1| ≤ 4ε.
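
The Fact and Lemma 2 can be checked numerically: the median of enough samples of |X| should land in [1 − 4ε, 1 + 4ε] almost always. A small Python sketch (our own; the values of ε, l and the trial count are illustrative, not the constants from the proof):

```python
import math
import random

eps, l, trials = 0.1, 2000, 1000
hits = 0
for _ in range(trials):
    # Median of l samples of |X|, where X is standard Cauchy.
    samples = sorted(abs(math.tan(math.pi * (random.random() - 0.5)))
                     for _ in range(l))
    if 1 - 4 * eps <= samples[l // 2] <= 1 + 4 * eps:
        hits += 1
print(hits / trials)   # should be very close to 1
```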

  23. Correctness Proof (last) • Therefore we have proved: The algorithm correctly estimates L1(S) up to the factor (1±ε), with probability at least (1-δ).

  24. Correctness Proof (review) • For those who are lost: • Each bucket is distributed like C·X, and median(|C·X|) = C. • “Enough” samples approximate median(|C·X|) = C “well enough”. • Each bucket is one sample.

  25. Tonight’s Program • “Obvious solution”. • Algorithm for p = 1. • The algorithm’s limitations. • Proof of correctness. • Overcome the limitations. • Time permitting: p = 2 and all p in (0, 2]. • God willing: uses of the algorithm.

  26. Bounded Precision • The numbers in the stream are integers; the problem is with the random variables. • We will show it is sufficient to pick them from the set of multiples of 1/L of bounded size (in plain words: the set of fractions of small numbers).

  27. Bounded Precision (cont.) • We want to generate a r.v. X with Cauchy distribution. • We choose Y uniformly from [0, 1). • X = F⁻¹(Y) = tan(πY/2). • Y′ is the multiple of 1/L closest to Y. • X′ is F⁻¹(Y′) rounded to a multiple of 1/L.
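
A Python sketch of this rounding scheme (our own illustration; the choice of L is arbitrary here):

```python
import math

L = 2 ** 20               # precision: every stored value is a multiple of 1/L

def rounded_cauchy_abs(y):
    """Bounded-precision X' from Y in [0, 1), per slide 27."""
    y_rounded = round(y * L) / L            # Y': nearest multiple of 1/L
    x = math.tan(math.pi * y_rounded / 2)   # X = F^{-1}(Y')
    return round(x * L) / L                 # X': rounded to a multiple of 1/L
```

Note that tan(πY/2) blows up as Y approaches 1; slide 29 below bounds the resulting error under the assumption Y < 1 − K/L, which keeps the evaluation away from the pole.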

  28. Cauchy’s Lottery…

  29. Bounded Precision (cont.) • Assume Y < 1 − K/L = 1 − γ. • The derivative of F⁻¹ near 1 − γ is O(1/γ²). • It follows that X′ = X + E, where |E| = O(1/(γ²·L)) = η.

  30. Bounded Precision (cont.) • (Result from the previous slide: X′ = X + E with |E| = η.) • By making η small enough, you can ignore the contribution of the rounding errors (up to ε and δ, from the algorithm’s proof).

  31. Memory Usage Reduction • The naïve implementation uses O(n) memory words to store the random matrix. • Couldn’t we generate the random matrix on the fly? • Yes, with a PRG. • We also toss fewer coins.

  32. Not just for fun. • From the Python programming language (www.python.org). [Screenshot of the documentation of Python’s random module.] • Source: http://www.python.org/doc/2.3.3/lib/module-random.html

  33. Review: Probabilistic Algorithms • Allow algorithm A to: • use random bits; • make errors. • A answers correctly with high probability: for every x, Pr_r[A(x, r) = P(x)] > 1 − ε (for very small ε, say 10⁻¹⁰⁰⁰). [Diagram: the input and the random bits feed A, which produces the output.]

  34. Derandomization • After 20 years of research we only have the following trivial theorem. • Theorem: Probabilistic poly-time algorithms can be simulated deterministically in exponential time (time 2^poly(n)).

  35. 000000000000000000000100000000010 . . 11111111111 2r Proof: • Suppose that A uses r random bits. • Run A using all 2r choices for random bits. input A output random bits Take the Majority vote of outputs. Time: 2r·poly(n)

  36. 00000001 . 1111 2r r=O(log n) Polynomial time deterministic algorithm! Algorithms which use few bits Time: 2r·poly(n) Algorithms with few random coins can be efficiently derandomized! input A output random bits

  37. Derandomization paradigm • Given a probabilistic algorithm that uses many random bits. • Convert it into a probabilistic algorithm that uses few random bits. • Derandomize it by using the previous Theorem.

  38. Pseudorandom Generators • Use a short “seed” of very few truly random bits to generate a long string of pseudo-random bits. • Pseudo-randomness: no efficient algorithm can distinguish truly random bits from pseudo-random bits. [Diagram: A fed by many truly random bits, versus A fed by the many “pseudo-random” bits a PRG expands from a few truly random seed bits.]

  39. Pseudo-Random Generators • The new probabilistic algorithm: A reads its “random” bits from the PRG’s output. • In our algorithm we need to store only the short seed, and not the whole set of pseudorandom bits. [Diagram: a short truly random seed, expanded by the PRG into pseudo-random bits for A.]

  40. -5 6 3 -5 -5 3 0 2 1 -3 2 0 1 5 1 1 1 0 -3 2 1 Remember? • There exist efficient “random access” (indexable) random number generator.

  41. PRG definition • Given an FSM Q. • Given a seed which is truly random. • Convert it into k chunks of random bits, each of length b. • Formally, G: {0,1}^m → ({0,1}^b)^k. • Let Q(x) be the state of Q after input x. • G is a PRG if ‖D[Q(x): x uniform over {0,1}^(bk)] − D[Q(G(x)): x uniform over {0,1}^m]‖_1 ≤ ε, where D[·] denotes the distribution of the final state.

  42. PRG properties • There exists a PRG G for space(S) with ε = 2^(−O(S)) such that: • G expands O(S·log R) bits into O(R) bits; • G requires only O(S) bits of storage in addition to its random bits; • any length-O(S) chunk of G(x) can be computed using O(log R) arithmetic operations on O(S)-bit words.

  43. Randomness reduction • Consider a fixed S_j; O(log M) bits of storage suffice to hold it. • Assume the pairs (i, a) arrive in increasing order of i; then the X_i are consumed in order, so we need O(n) chunks of randomness. • ⇒ There exists a PRG that needs a random seed of size O(log M·log(n/δ)) to expand it into the n pseudorandom variables X_1, …, X_n.

  44. Randomness reduction • The variables X_1, …, X_n give us the S_j, and the S_j give us L1(S). • But S_j does not depend on the order of the i’s: for each i the same X_i is generated, so the input can be unsorted. • We use l = O(log(1/δ)/ε²) random seeds, one per bucket.

  45. Theorem 2 • There is an algorithm which estimates L1(S) up to a factor (1 ± ε) with probability 1 − δ and uses (taking S = log M and R = n/δ above): • O(log M·log(1/δ)/ε²) bits of random-access storage, • O(log(n/δ)) arithmetic operations per pair (i, a), • O(log M·log(n/δ)·log(1/δ)/ε²) random bits.

  46. Further Results • When p = 2, the algorithm and analysis are the same, with the Cauchy distribution replaced by the Gaussian. • For general p in (0, 2], no closed formulas exist for the densities or distribution functions.

  47. General p • Fact: p-stable random variables can be generated from two independent variables that are distributed uniformly over [0, 1] (Chambers, Mallows and Stuck, 1976). • It seems that Lemma 2 and the algorithm itself could also work in this case, but there was no need to work this out, as there are no known applications with p different from 1 and 2.
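
A Python sketch of the Chambers-Mallows-Stuck construction for the symmetric case (our own transcription of the standard formula; θ is uniform on (−π/2, π/2) and W = −ln U is exponential, so two uniform variables suffice):

```python
import math
import random

def p_stable(p, rng=random):
    """One sample from a symmetric p-stable distribution, 0 < p <= 2."""
    theta = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)                  # W = -ln(U), U uniform on (0, 1]
    if p == 1.0:
        return math.tan(theta)                # reduces exactly to Cauchy
    a = math.sin(p * theta) / math.cos(theta) ** (1.0 / p)
    b = (math.cos((1.0 - p) * theta) / w) ** ((1.0 - p) / p)
    return a * b
```

For p = 1 this reduces to tan(θ), the Cauchy sampler used throughout; for p = 2 it yields a Gaussian (with the 2-stable normalization, variance 2).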

  48. CX
