
Derandomizing LOGSPACE



Presentation Transcript


  1. Derandomizing LOGSPACE Based on a paper by Russell Impagliazzo, Noam Nisan and Avi Wigderson Presented by Amir Rosenfeld

  2. Derandomization & Pseudorandomness • Pseudorandomness is about understanding the minimum amount of randomness actually required by a probabilistic model of computation. • A pseudorandom generator takes m << n truly random bits and deterministically stretches them into n pseudorandom bits. • A pseudorandom generator is said to “fool” a computational model when the n truly random bits used by the model can be replaced by the n pseudorandom bits created by the generator without significant difference in the model's behavior. • If we can “fool” a probabilistic algorithm using m << n truly random bits, then we can derandomize it by trying out all 2^m possibilities.
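The last bullet is the whole derandomization recipe, so a sketch may help. This is a minimal Python illustration, assuming stand-in callables `algorithm` and `generator` (hypothetical names, not from the paper): accept iff a majority of the 2^m seeds lead to acceptance.

```python
from typing import Callable

def derandomize(algorithm: Callable[[str, str], bool],
                generator: Callable[[str], str],
                x: str, m: int) -> bool:
    """Deterministically simulate a probabilistic algorithm.

    If the generator fools the algorithm, the majority vote over all
    2**m pseudorandom strings matches the majority vote over truly
    random strings, so this procedure decides the same language.
    """
    votes = 0
    for seed in range(2 ** m):
        pseudorandom_bits = generator(format(seed, f"0{m}b"))
        if algorithm(x, pseudorandom_bits):
            votes += 1
    return 2 * votes > 2 ** m  # strict majority of seeds accept
```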

  3. Relevant Computational Models • The generator described by INW fools every computational model which can be described as a network of probabilistic processors. • The number of truly random bits required by the generator (m) depends on the communication bandwidth of the algorithm run by the network. • No assumptions are made about the computational power of the processors.

  4. Communication Network [Figure: a network of five processors v1–v5.] Each processor receives part of the input and requires a preset number of random coin flips. Each processor calculates some function of its input, its random bits and the information it has received, and sends and receives at most c bits of information.

  5. Some Intuition • In a network algorithm that uses probabilistic processors, we can reuse the same random bits on many processors, provided their communication is limited enough that one processor cannot learn much about the random bits of the others. [Figure: Alice and Bob each require r random bits and exchange c bits of communication. The entropy of Alice's bits, as seen by Bob, is at least r − c, so Bob can reuse most of the entropy in the bits given to Alice.]
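To make the intuition quantitative (a one-line sketch of the standard entropy bound, not the paper's exact statement): conditioning on a c-bit transcript $\Pi$ can reduce the entropy of Alice's string $R_A$ by at most c bits,

$$H(R_A \mid \Pi) \;\ge\; H(R_A) - H(\Pi) \;\ge\; r - c,$$

so from Bob's point of view Alice's bits still carry almost all of their entropy, and most of it can be recycled.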

  6. The Basic Two-Party Model • Yao (1979) described a communication complexity model in which two parties are required to compute a function f(x, y), where each party holds one of the two inputs. • The communication complexity of f is the minimum number of bits that must be communicated between the parties until one of them outputs the answer.

  7. Defining Protocol • The network algorithm uses a specific protocol to communicate between the parties. • We call a protocol “normal” if the total amount of information conveyed by a party equals the total number of bits that it sent/received, i.e., the lengths and timing of all messages are known in advance, so no extra information is carried by when or whether a message is sent. • A c-protocol is a normal protocol in which, on any input to the network and every random choice, every party sends and receives at most c bits of communication.

  8. Definition of the Basic Generator • Following the description on the next slide: fix an expander H = (V, E) with 2^r vertices and degree D = 2^d, and let g : {0,1}^(r+d) → {0,1}^r × {0,1}^r interpret its seed as the name of a random directed edge of H and output the edge's two endpoints.

  9. The Basic Generator • Fix an expander graph H = (V, E), with 2^r vertices and degree D = 2^d. • The input is the name of a random directed edge in E; it therefore requires m = r + d random bits. • The output is the pair of vertices on that edge, so the generator produces two r-bit output strings.
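In code, the map is tiny. A minimal sketch, assuming the expander is given by a `neighbor(v, i)` oracle (a hypothetical helper returning the i-th neighbor of vertex v; the slides do not fix an interface):

```python
def basic_generator(seed: int, r: int, d: int,
                    neighbor) -> tuple[int, int]:
    """Map an (r+d)-bit seed, naming a random directed edge of H,
    to the edge's two endpoints: two r-bit pseudorandom strings."""
    v = seed >> d               # first r bits: a vertex of H
    i = seed & ((1 << d) - 1)   # last d bits: an edge label in [D]
    return v, neighbor(v, i)    # the edge (v, i-th neighbor of v)
```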

  10. What does it Fool? Theorem 1: g is a c-generator, i.e., it fools 2-party c-protocols, for c = (d − log λ)/2, where λ is the second-largest eigenvalue of H. Recall that g is built from an expander graph H of degree D = 2^d.

  11. Proof of Theorem 1 • For every graph H = (V, E) of degree D and second-largest eigenvalue λ, and for every S, T ⊆ V, the following inequality holds: | e(S, T) − D|S||T| / |V| | ≤ λ √(|S||T|), where e(S, T) counts the edges between S and T. This is the Mixing Lemma that was presented earlier in the course.

  12. Proof of Theorem 1 (cont.)
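A reconstruction of the continuation (the standard argument; the slide's own derivation may differ in details): a normal c-protocol generates at most $2^{2c}$ transcripts, and the pairs of inputs producing a given transcript form a combinatorial rectangle $S \times T$. For a uniformly random directed edge $(u, v)$ of $H$, the mixing lemma gives

$$\left|\Pr[(u,v)\in S\times T] - \frac{|S||T|}{|V|^2}\right| = \frac{\bigl|e(S,T) - D|S||T|/|V|\bigr|}{D|V|} \;\le\; \frac{\lambda\sqrt{|S||T|}}{D|V|} \;\le\; \frac{\lambda}{D},$$

so the total advantage in distinguishing g's output from a truly random pair is at most $2^{2c}\lambda/2^{d}$, which is small exactly when c is below $(d - \log\lambda)/2$.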

  13. Extractor – An Alternative View • The same construction could be achieved with an extractor: the seed of m = r + d truly random bits splits into an r-bit string and d auxiliary bits used by the extractor, and the two outputs are the r-bit string itself and E(r, d). [Figure: the seed splitting into the r-bit and d-bit parts. Because communication is bounded, the r-bit string still contains much entropy from the other party's point of view.]

  14. Expanding the Model • The communication network is a graph H = (V, E) whose nodes are parties/processors and whose directed edges represent communication lines between them. • Each processor has unlimited power and can use any input information and any communicated information it receives. • We are concerned with network algorithms using c-protocols.

  15. Partition Trees • A partition tree T of a graph H = (V, E) is a rooted binary tree with a one-to-one, onto mapping of V to the leaves of T. • T is called balanced if the depth of T is O(log |V|). • Every internal node ν of T partitions V into 3 sets: Aν, Bν and Cν; these are the vertices of V residing in the leaves of the left child of ν, the leaves of the right child of ν, and the remaining leaves of T, respectively.

  16. A Partition Tree [Figure: a balanced partition tree with internal nodes T1–T4 over the leaves v1–v5.]

  17. A Partition Tree [Figure: the same partition tree.]

  18. Partition Trees (cont.) • cut(ν) is the subset of E whose edges connect vertices in two different sets. • The width of ν is the smallest number of vertices that cover all of the edges in cut(ν). • The width of a tree is the maximum width over its internal nodes. • The width of a graph H is the smallest width of a balanced partition tree for H. A small worked example of these definitions follows below.
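As a worked example (my own illustration, not from the slides), here are the cut and width computations for the 5-processor line network of the earlier figures, under one balanced partition tree; all helper names are mine:

```python
from itertools import combinations

# Edges of the 5-processor line network v1 - v2 - v3 - v4 - v5
edges = {(1, 2), (2, 3), (3, 4), (4, 5)}

def cut_edges(A, B, C):
    """cut(v): edges of H whose endpoints lie in two different sets
    among A_v, B_v, C_v."""
    sets = [set(A), set(B), set(C)]
    def side(u):
        return next(i for i, s in enumerate(sets) if u in s)
    return {(u, v) for (u, v) in edges if side(u) != side(v)}

def width(cut):
    """Width of a node: the smallest number of vertices covering
    every edge of its cut (brute force, fine at this size)."""
    verts = sorted({u for e in cut for u in e})
    for k in range(len(verts) + 1):
        for cover in combinations(verts, k):
            if all(u in cover or v in cover for (u, v) in cut):
                return k

# Root: A = {v1, v2}, B = {v3, v4, v5}, C = empty
print(width(cut_edges({1, 2}, {3, 4, 5}, set())))  # 1
# Node over {v3, v4, v5}: A = {3}, B = {4, 5}, C = {1, 2}
print(width(cut_edges({3}, {4, 5}, {1, 2})))       # 1 -> constant width
```

Every cut of this tree is covered by a single vertex, which is the constant-width phenomenon slide 31 relies on.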

  19. A Partition Tree [Figure: the same partition tree, highlighting the cut at node T2: cut(T2) contains 4 edges.]

  20. k-measurement • A k-measurement M on a protocol is a k-bit function such that each of its bits can be computed at the end of the protocol by (at least) one single processor. [Figure: processors 1 through n jointly producing the entire k-bit measurement.]

  21. The Required Generator

  22. Constructing the Generator

  23. Constructing the Generator (2)

  24. The Generator (3) [Figure: the truly random seed x enters at the root of the partition tree and is recursively split into left (L) and right (R) parts down the tree, until one pseudorandom string reaches each of the leaves v1–v5.]
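A minimal sketch of the recursion the figure depicts, reusing `basic_generator` from the sketch after slide 9. One caveat flagged as an assumption: the real construction re-tunes the expander parameters (r, d) at every level to match the protocol's communication bound, while this sketch passes a single pair through for brevity.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A node of the partition tree; `processor` is set only at leaves."""
    processor: Optional[int] = None
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    def is_leaf(self) -> bool:
        return self.processor is not None

def tree_generator(seed: int, node: Node, r: int, d: int, neighbor):
    """Recursively stretch one truly random seed into a pseudorandom
    string for every leaf (processor) of the partition tree."""
    if node.is_leaf():
        return {node.processor: seed}
    # The basic generator splits the seed into two correlated seeds,
    # one per subtree; limited communication across the cut means the
    # other side cannot tell them from independent random strings.
    left_seed, right_seed = basic_generator(seed, r, d, neighbor)
    out = tree_generator(left_seed, node.left, r, d, neighbor)
    out.update(tree_generator(right_seed, node.right, r, d, neighbor))
    return out
```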

  25. What can the Generator Do? Main Theorem:

  26. Random LOGSPACE The class of problems that are decidable by a Turing machine with the following characteristics: • Input tape of length n • Work tape of length O(log n) • Random-bits tape

  27. Non-Uniform Machine • The Turing machine just described is a uniform model of computation. • We can build a non-uniform machine by hardwiring a specific machine for every input. • This machine has the random-bit tape as its only input.

  28. Non-Uniform Machine (2) • We are now left with an OBDD that has to accept or reject according to the random tape. The OBDD implies read-once access to the random bits, but this is only a simplification for the purpose of explanation. [Figure: an OBDD A reading the random-bit tape R; the LOGSPACE bound means its width is at most poly(n).]
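A minimal sketch of what such a machine looks like as code, with hypothetical stand-ins: `delta` is the hardwired per-level transition table (one of poly(n) states per level, i.e., an O(log n)-bit state) and `accepting` is the set of accepting final states.

```python
def run_obdd(random_tape: str, delta, accepting: set) -> bool:
    """Evaluate a width-bounded, read-once branching program (OBDD)
    on the random tape: read each random bit once, left to right,
    keeping only the O(log n)-bit state between levels."""
    state = 0                                   # start state of level 0
    for level, bit in enumerate(random_tape):
        state = delta[level][state][int(bit)]   # hardwired transition
    return state in accepting
```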

  29. The Communication Network Reducing the OBDD to a network and protocol: • Each random-bit cell is a processor in the network. • Whenever the head moves from cell A_i to its neighbor A_{i+1}, the entire state of the machine is sent from processor i to processor i+1.

  30. The Resulting Line Protocol • At each random-cell transition, a processor sends the system state, O(log n) bits, to its neighbor. [Figure: processors P1, P2, P3, …, P_{r−1}, P_r in a line, each passing the O(log n)-bit system state to the next.]

  31. Tree Width of the Line Protocol [Figure: a balanced partition tree T over the line of processors P1–P5; every cut crosses only a constant number of line edges, so the tree has constant width.]

  32. What do we need to fool? • A processor sends the state at most a constant number of times; thus this is an O(S)-protocol, where S = O(log n) is the size of the system state. • The tree-width of the network is O(1). • The k-measurement is actually a 1-measurement: Accept or Reject, output by the last processor.

  33. Derandomizing LOGSPACE Bounded Read-Multiplicity Machines • The total state of the machine is held in O(log n) bits. • Therefore, our generator requires only O(log^2 n) truly random bits, allowing us to derandomize the algorithm by trying out all n^{O(log n)} seed strings.
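The arithmetic behind those numbers, spelled out (my own accounting, following the recursive construction above): the balanced partition tree of the line network has depth $O(\log n)$, and the basic generator at each level spends $d = O(\log n)$ extra seed bits to fool the $O(\log n)$-bit state messages, so

$$m = O(\log n) + O(\log n)\cdot O(\log n) = O(\log^2 n), \qquad 2^m = 2^{O(\log^2 n)} = n^{O(\log n)}.$$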

  34. Proof (for LOGSPACE Machines)

  35. Proof (cont.) • Induction base: for a leaf, the two distributions are identical. • Induction step: we create hybrid distributions and prove that their combined distance from the truly random distribution meets the goal.

  36. The Hybrid Distributions [Figure: a sequence of hybrid distributions over the leaves, interpolating block by block between truly random bits (R) and generator outputs (G); by the induction hypothesis, and averaging over the possible values of the right side, each adjacent pair of hybrids is close.]
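Written out (a sketch of the standard hybrid bookkeeping, in loose notation of my own): if $D_0$ is the all-pseudorandom distribution, $D_k$ the truly random one, and $D_1, \dots, D_{k-1}$ the hybrids in between, the triangle inequality telescopes the total distance for any measurement $M$:

$$\bigl|\Pr_{D_0}[M=1]-\Pr_{D_k}[M=1]\bigr| \;\le\; \sum_{i=0}^{k-1}\bigl|\Pr_{D_i}[M=1]-\Pr_{D_{i+1}}[M=1]\bigr|,$$

and each summand is bounded by the induction hypothesis after averaging over the truly random half.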

  37. Summary • We showed a generator that can fool randomized network algorithms. • We showed a reduction of LOGSPACE machines to a network algorithm. • We proved that the generator works for the networks that result from that reduction. • This proves that we can derandomize LOGSPACE using O(log^2 n) truly random bits, i.e., by trying out all n^{O(log n)} seeds.

  38. The END
