
The story of superconcentrators: The missing link

Presentation Transcript


  1. The story of superconcentrators: The missing link. Michal Koucký, Institute of Mathematics, Prague

  2. Computational complexity • How much computational resource (time, space, etc.) do we need to compute various functions? • Upper bounds (algorithms). • Lower bounds.

  3. Lower bound techniques • We have very little understanding of actual computation. • Diagonalization (Gödel, Turing, …). • Information theory (Shannon, Kolmogorov, …). • Other special techniques: random restrictions, approximation by polynomials (Ajtai, Sipser, Razborov, …).

  4. Integer Addition [Figure: two n-bit inputs a and b and their (n+1)-bit sum c = a + b.]

  5. Circuits • Gates have arbitrary fan-in and may compute arbitrary Boolean functions. • Size of a circuit = number of wires. [Figure: a depth-d circuit with inputs x_1, …, x_m and outputs y_1, …, y_n.]

  6. Circuits vs Turing machines • Polynomial-size circuits ~ polynomial-time computation. • Open: exponential-time computation cannot be simulated by polynomial-size circuits.

  7.–12. Integer Addition [Figures: successive frames of an n-bit addition c = a + b with concrete bit strings, illustrating how changing a few input bits propagates through the carries to many output bits of the sum.]
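
The carry-propagation point of these frames can be seen in a few lines of code; the following is an editorial sketch, not part of the slides, using a toy ripple-carry adder on little-endian bit lists (all names are illustrative).

```python
# Sketch (not from the slides): ripple-carry addition, illustrating the carry
# propagation shown in the animation; changing a single low-order bit of a can
# flip a long run of output bits of c = a + b.
def add(a_bits, b_bits):
    """Add two little-endian bit lists; return the (n+1)-bit little-endian sum."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s = a + b + carry
        out.append(s & 1)
        carry = s >> 1
    out.append(carry)
    return out

n = 8
a = [1, 0, 0, 0, 0, 0, 0, 0]   # a = 1
b = [1] * n                    # b = 2^n - 1: every position propagates a carry
print(add(a, b))               # the low bit of a ripples through all n positions
print(add([0] * n, b))         # without it, no carries occur at all
```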

  13. Connectivity property • For any two interleaving sets X and Y, where X is a set of input bits of a and Y is a set of output bits of c, there are |X| = |Y| vertex-disjoint paths between X and Y in any circuit computing integer addition c = a + b. [Figure: the inputs a, b and the output c with the sets X and Y marked.]

  14. Superconcentrators [Valiant '75] • For any k, any X, and any Y with |X| = |Y| = k, f(X,Y) = k, where f(X,Y) is the maximum number of vertex-disjoint paths between X and Y. • Can be built using O(n) wires. Oooopss! [Figure: a graph with n inputs and n outputs and the sets X and Y marked.]
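
As an illustration of the connectivity measure used on slides 13–15, here is a small sketch (not from the talk) that computes f(X,Y), the maximum number of vertex-disjoint X–Y paths in a small DAG, via max-flow with the standard node-splitting reduction; the toy graph and the use of the networkx library are assumptions made for the example.

```python
# Sketch: f(X, Y) = max number of vertex-disjoint X-Y paths, computed as a max flow
# after splitting every vertex v into v_in -> v_out with capacity 1.
import networkx as nx

def vertex_disjoint_paths(edges, X, Y):
    """Max number of vertex-disjoint X-Y paths in the DAG given by `edges`."""
    G = nx.DiGraph()
    nodes = {u for e in edges for u in e}
    for v in nodes:                      # vertex capacity 1 via node splitting
        G.add_edge((v, 'in'), (v, 'out'), capacity=1)
    for u, v in edges:                   # original edges, capacity 1
        G.add_edge((u, 'out'), (v, 'in'), capacity=1)
    for x in X:                          # super-source into the chosen inputs
        G.add_edge('s', (x, 'in'), capacity=1)
    for y in Y:                          # chosen outputs into a super-sink
        G.add_edge((y, 'out'), 't', capacity=1)
    flow_value, _ = nx.maximum_flow(G, 's', 't')
    return flow_value

# Toy example: both inputs must pass through the single middle vertex m,
# so only one vertex-disjoint path exists between {a1, a2} and {c1, c2}.
edges = [('a1', 'm'), ('a2', 'm'), ('m', 'c1'), ('m', 'c2')]
print(vertex_disjoint_paths(edges, ['a1', 'a2'], ['c1', 'c2']))  # -> 1
```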

  15. Relaxed superconcentrators [Dolev et al. '83] • For any k, for random X and random Y with |X| = |Y| = k, E_{X,Y}[f(X,Y)] ≥ δk. • Fixed depth requires a superlinear number of wires! [Figure: a depth-d graph with the random sets X and Y marked.]

  16. Bounds on relaxed superconcentrators [Dolev, Dwork, Pippenger, and Wigderson '83; Pudlák '92]
      • d = 2: size Ω(n log n)
      • d = 3: size Ω(n log log n)
      • d = 2k or d = 2k+1: size Ω(n λ_k(n)), where λ_1(n) = log n and λ_{k+1}(n) = λ_k^*(n)
      Applications [Chandra, Fortune, and Lipton '83].
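
The λ_k functions in this table grow extremely slowly; the following sketch computes them under the usual convention (an assumption here, since the slide only states the recurrence) that g^*(n) is the number of times g must be iterated on n before the value drops to at most 1.

```python
# Sketch (assumption: g*(n) counts how many applications of g drive n down to <= 1;
# the slide only states lambda_1(n) = log n and lambda_{k+1}(n) = lambda_k^*(n)).
import math

def iterate_until_small(g, n):
    """g*(n): number of applications of g needed to bring n down to <= 1."""
    count = 0
    while n > 1:
        n = g(n)
        count += 1
    return count

def lam(k, n):
    """lambda_k(n): lambda_1 = log_2, lambda_{k+1} = (lambda_k)^*."""
    if n <= 1:
        return 0
    if k == 1:
        return math.log2(n)
    return iterate_until_small(lambda m: lam(k - 1, m), n)

print(lam(2, 2 ** 16))    # lambda_2 is log*; log*(65536) = 4
print(lam(3, 10 ** 100))  # a tiny value even for astronomically large n
```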

  17. Depth-1 circuits for Prefix-XOR • Prefix-XOR: y_k = x_1 ⊕ x_2 ⊕ … ⊕ x_{k-1} ⊕ x_k. • In depth 1 the gate for y_k must read x_1, …, x_k directly → the total size is Θ(n^2). [Figure: the depth-1 circuit with outputs y_1, …, y_n over inputs x_1, …, x_n.]
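
A sketch of what the Θ(n^2) bound counts (illustrative code, not from the slides): in the naive depth-1 circuit the gate for y_k reads all of x_1, …, x_k, so the wire count is 1 + 2 + … + n.

```python
# Sketch: the naive depth-1 Prefix-XOR circuit; counting wires gives
# 1 + 2 + ... + n = Theta(n^2).
def prefix_xor_depth1(x):
    n = len(x)
    wires = 0
    y = []
    for k in range(1, n + 1):
        acc = 0
        for i in range(k):      # one wire from each of x_1, ..., x_k
            acc ^= x[i]
            wires += 1
        y.append(acc)
    return y, wires

y, wires = prefix_xor_depth1([1, 0, 1, 1, 0, 1])
print(y, wires)   # n = 6 -> 6*7/2 = 21 wires
```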

  18. Depth-2 circuits for Prefix-XOR • For each i = 1, …, log n, a middle block computes the n/2^i parities of the consecutive input blocks of size 2^i. • Each output y_j is the XOR of at most log n of these block parities → the total size is O(n log n). [Figure: the depth-2 circuit with inputs x_1, …, x_n, the middle blocks, and outputs y_1, …, y_n.]
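
A sketch of the O(n log n) construction (illustrative code, not from the slides): the middle layer stores parities of aligned blocks of sizes 2^i, and each prefix decomposes into at most log n such blocks.

```python
# Sketch: depth-2 Prefix-XOR.  The middle layer holds, for each i, the parities of
# aligned input blocks of size 2^i; every prefix splits into at most log n such
# blocks, so the whole circuit has O(n log n) wires.
def prefix_xor_depth2(x):
    n = len(x)
    # Middle layer: block[i][j] = parity of x[j*2^i : (j+1)*2^i].
    block = {0: list(x)}
    i = 0
    while (1 << (i + 1)) <= n:
        prev = block[i]
        block[i + 1] = [prev[2 * j] ^ prev[2 * j + 1] for j in range(len(prev) // 2)]
        i += 1
    # Output layer: greedily cover the prefix [0, k) by aligned dyadic blocks,
    # largest first (this is just the binary representation of k).
    y = []
    for k in range(1, n + 1):
        acc, pos = 0, 0
        for lvl in range(i, -1, -1):
            size = 1 << lvl
            if pos + size <= k:
                acc ^= block[lvl][pos // size]
                pos += size
        y.append(acc)
    return y

print(prefix_xor_depth2([1, 0, 1, 1, 0, 1, 1, 0]))
```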

  19. Variants of superconcentrators. For any k and sets X, Y with |X| = |Y| = k:
      • superconcentrators: any X and any Y, f(X,Y) = k (or ≥ δk)
      • middle ground: any X and random Y, E_Y[f(X,Y)] ≥ δk
      • relaxed superconcentrators: random X and random Y, E_{X,Y}[f(X,Y)] ≥ δk

  20. Comparison of depth-d superconcentrators
      For d = 2, size Θ(…):
      • superconcentrators: n (log n)^2 / log log n
      • middle ground: n (log n / log log n)^2
      • relaxed superconcentrators: n log n
      For d = 2k or d = 2k+1: all variants have size n λ_k(n), where λ_1(n) = log n and λ_{k+1}(n) = λ_k^*(n).

  21. Good error-correcting codes • Let 0 < ρ, δ < 1 be constants and m < n; enc : {0,1}^m → {0,1}^n. • For any x, x' ∈ {0,1}^m with x ≠ x', dist_Ham(enc(x), enc(x')) ≥ δn. • m ≥ ρn. Applications: zillions.
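
To make the two parameters concrete, here is a small editorial sketch (not from the talk) that brute-forces the rate and minimum distance of a linear code given by a generator matrix; the [7,4] Hamming code is used purely as an example.

```python
# Sketch: check the two parameters from the definition above, rate m/n and
# minimum Hamming distance, for a small linear code over GF(2).
from itertools import product

def encode(G, x):
    """enc(x) = x * G over GF(2)."""
    n = len(G[0])
    return tuple(sum(x[i] * G[i][j] for i in range(len(G))) % 2 for j in range(n))

def min_distance(G):
    """For a linear code, min distance = min weight of a nonzero codeword."""
    m = len(G)
    return min(sum(encode(G, x)) for x in product([0, 1], repeat=m) if any(x))

# Generator matrix of the [7,4] Hamming code (illustrative example).
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
m, n = len(G), len(G[0])
print("rate =", m / n, "min distance =", min_distance(G))   # rate 4/7, distance 3
```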

  22. Connectivity of circuits computing codes • For any k, any X, and a randomly chosen Y with |X| = |Y| = k, E_Y[f(X,Y)] ≥ δk. [Gál, Hansen, K., Pudlák, Viola '12] [Figure: a circuit computing the encoding, with inputs X and outputs Y marked.]

  23. Comparison of depth-d superconcentrators (the table from slide 20, shown again).

  24. Single output functions • The regular language (c*ac*b)*c* [K., Pudlák, and Thérien '05] → circuits computing it must contain relaxed superconcentrators. [Figure: a single-output circuit with input set X and output y.]
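
For concreteness, the single-output function on this slide is membership in the regular language (c*ac*b)*c*; the following editorial sketch just tests membership with Python's re module (the example strings are illustrative).

```python
# Sketch: membership in the regular language (c*ac*b)*c* over the alphabet {a, b, c}.
import re

LANG = re.compile(r"(c*ac*b)*c*")

def in_language(w: str) -> bool:
    return LANG.fullmatch(w) is not None

print(in_language("cacbccacb"))  # True: two (c*ac*b) factors, trailing c's allowed
print(in_language("cacbac"))     # False: the final 'a' is never followed by a 'b'
```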

  25. Recent improvements: lower bounds for explicit functions (matrix multiplication) [Cherukhin '08, Jukna '10, Drucker '12]
      • d = 2: size Ω(n^{3/2})
      • d = 3: size Ω(n log n)
      • d = 4: size Ω(n log log n)
      • d = 2k+1 or d = 2k+2: size Ω(n λ_k(n)), where λ_1(n) = log n and λ_{k+1}(n) = λ_k^*(n)

  26. Conclusions • Information theory is the strongest lower bound tool we currently have (unfortunately).
