
On the power of lower bound techniques for 1-way quantum communication complexity

Shengyu Zhang, The Chinese University of Hong Kong





Presentation Transcript


  1. On the power of lower bound techniques for 1-way quantum communication complexity Shengyu Zhang The Chinese University of Hong Kong

  2. [Diagram: Communication Complexity linked to Algorithms, Circuit lower bounds, Streaming Algorithms, Information theory, Quantum Computing, Crypto, VLSI, Data Structures, Games, …] Question: What’s the largest gap between classical and quantum communication complexities?

  3. Communication complexity • [Yao79] Two parties, Alice and Bob, jointly compute a function f(x,y), with x known only to Alice and y known only to Bob. • Communication complexity: how many bits need to be exchanged? [Figure: Alice holds x, Bob holds y; both parties output f(x,y).]

  4. Various protocols • Deterministic: D(f). • Randomized: R(f). • A bounded error probability is allowed. • Private or public coins? The two differ by only ±O(log n). • Quantum: Q(f). • A bounded error probability is allowed. • Assumption: no shared entanglement. (Does it help? Open.)

  5. Communication complexity: one-way model • One-way: Alice sends a single message to Bob. --- D1(f), R1(f), Q1(f) [Figure: Alice holds x and sends one message to Bob, who holds y and outputs f(x,y).]

  6. About the one-way model • Power: • Efficient protocols for specific functions such as Equality and Hamming Distance, and in general for all symmetric XOR functions; these are as efficient as the best two-way protocols. • Applications: • Lower bounds for the space complexity of streaming algorithms. • Lower bounds? They can be quite hard to prove, especially in the quantum case.
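The efficient Equality protocol mentioned above can be sketched as a standard public-coin fingerprinting scheme (illustrative code, not from the talk; the function name and trial count are mine): Alice sends only a few parity bits of x under shared random masks, so the communication is O(log(1/error)) rather than n.

```python
import random

def equality_one_way(x: str, y: str, trials: int = 20) -> bool:
    """Public-coin randomized one-way protocol for Equality (sketch).

    The masks model shared randomness; Alice's whole message is
    `trials` parity bits, independent of n.  If x == y the parities
    always agree; if x != y, each trial detects the difference with
    probability 1/2, so the error is at most 2**-trials.
    """
    n = len(x)
    for _ in range(trials):
        mask = [random.randrange(2) for _ in range(n)]  # shared coins
        alice_bit = sum(m * int(b) for m, b in zip(mask, x)) % 2
        bob_bit = sum(m * int(b) for m, b in zip(mask, y)) % 2
        if alice_bit != bob_bit:
            return False  # parities differ: definitely unequal
    return True  # equal, up to error probability 2**-trials
```

Note the one-sided error: equal inputs are always accepted, and only unequal inputs can be mistakenly accepted.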

  7. Question • Question: What’s the largest gap between classical and quantum communication complexities? • Partial functions, relations: exponential. • Total functions, two-way: • Largest known gap: Q(Disj) = Θ(√n) vs. R(Disj) = Θ(n). • Best general bound: R(f) = exp(Q(f)). • Conjecture: R(f) = poly(Q(f)).

  8. Question • Question: What’s the largest gap between classical and quantum communication complexities? • Partial functions, relations: exponential. • Total functions, one-way: • Largest known gap: R1(EQ) = 2∙Q1(EQ). • Best general bound: R1(f) = exp(Q1(f)). • Conjecture: R1(f) = poly(Q1(f)), or even R1(f) = O(Q1(f)).

  9. Approaches • Approach 1: directly simulate a quantum protocol by a classical one. • [Aaronson] R1(f) = O(m∙Q1(f)). • Approach 2: L(f) ≤ Q1(f) ≤ R1(f) ≤ poly(L(f)). • [Nayak99; Jain, Z.’09] R1(f) = O(Iμ∙VC(f)), where Iμ is the mutual information of any hard distribution μ. • Note: for approach 2 to possibly succeed, the quantum lower bound L(f) has to be polynomially tight for Q1(f).

  10. Main result • Three lower bound techniques are known for Q1(f): • Nayak’99: Partition Tree • Aaronson’05: Trace Distance • the two-way complexity Q(f) • [Thm] All of these lower bounds can be arbitrarily weak. • In fact, random functions have Q(f) = Ω(n), but the first two lower bounds only give O(1).

  11. Next • A closer look at the Partition Tree bound. • Comparing Q with the Partition Tree (PT) and Trace Distance (TD) bounds.

  12. Nayak’s info-theoretic argument Index(x,i) = xi • [Nayak’99] Q1(Index) = Ω(n). • ρx contains Ω(1) info about x1, since i may be 1. • Regardless of x1, ρx contains Ω(1) info about x2. • And so on. [Figure: Alice holds x ∈ {0,1}^n, Bob holds i ∈ [n]; Alice sends the state ρx.]

  13. Nayak’s info-theoretic argument Index(x,i) = xi
  • ρ = ∑x px∙ρx, where ρb = 2∑x:x1=b px∙ρx
  • S(ρ) = S(½ρ0 + ½ρ1)
    ≥ I(X1; M1) + ½S(ρ0) + ½S(ρ1)  // Holevo bound; M1 is Bob’s conclusion about X1
    ≥ 1 − H(ε) + ½S(ρ0) + ½S(ρ1)  // Fano’s inequality
    ≥ … ≥ n(1 − H(ε)).
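The closed form n(1 − H(ε)) at the end of the chain is easy to evaluate; a minimal numeric sketch (function names are mine):

```python
import math

def binary_entropy(p: float) -> float:
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def nayak_index_bound(n: int, eps: float) -> float:
    """Nayak's lower bound n*(1 - H(eps)) on Q1(Index) for n-bit
    inputs and error probability eps."""
    return n * (1 - binary_entropy(eps))
```

For example, with n = 1000 and ε = 1/3 the bound is about 81.7 qubits; it vanishes as ε approaches 1/2, which is exactly the issue taken up on slide 16.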

  14. Partition tree Index(x,i) = xi • ρ = ∑x px∙ρx • ρb = 2∑x:x1=b px∙ρx • ρb1b2 = 4∑x:x1=b1,x2=b2 px∙ρx [Figure: the binary tree over {0,1}^3: ρ at the root, ρ0 and ρ1 at depth 1, ρ00, ρ01, ρ10, ρ11 at depth 2, and the leaves 000, …, 111.]

  15. Partition tree Index(x,i) = xi • ρ = ∑x px∙ρx • In general: • a distribution p on {0,1}^n • a partition tree for {0,1}^n • a gain of H(δ) − H(ε) at each vertex v • v is partitioned by (δ, 1−δ) [Figure: a partition tree whose internal vertices split their subsets of {0,1}^n into two parts.]

  16. Issue • [Fano’s inequality] I(X;Y) ≥ H(X) − H(ε), where X, Y are over {0,1} and ε = Pr[X ≠ Y]. • What if H(δ) < H(ε)? • Idea 1: use success amplification to decrease the error ε to ε*. • Idea 2: give up those vertices v with small H(Xv). • Bound: max over T, p, ε* of log(1/ε*)∙∑v p(v)∙[H(Xv) − H(ε*)]^+ • Question: how to calculate this? [Figure: the binary entropy curve H(δ).]

  17. Picture clear • max over T, p, ε* of log(1/ε*)∙∑v p(v)∙[H(Xv) − H(ε*)]^+ • Very complicated, compared to Index, where the tree is the complete binary tree and each H(δv) = 1 (i.e. δv = 1/2). • [Thm] The maximization is achieved by a complete binary tree with δv = 1/2 everywhere.
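As a concrete check of this characterization, here is a toy evaluator of the inner sum ∑v p(v)[H(δv) − H(ε*)]^+ (the nested-tuple tree encoding is my own simplification, not the paper’s formalism):

```python
import math

def H(p: float) -> float:
    """Binary entropy in bits, with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def pt_sum(tree, eps_star: float, p_v: float = 1.0) -> float:
    """Sum of p(v) * [H(delta_v) - H(eps_star)]^+ over a partition
    tree.  A tree is None (a leaf) or a triple (delta, left, right):
    vertex v splits its set with bias (delta, 1 - delta), so a
    random input goes to the left child with probability delta."""
    if tree is None:
        return 0.0
    delta, left, right = tree
    gain = max(H(delta) - H(eps_star), 0.0)  # the [.]^+ truncation
    return (p_v * gain
            + pt_sum(left, eps_star, p_v * delta)
            + pt_sum(right, eps_star, p_v * (1 - delta)))

def complete_tree(depth: int):
    """Complete binary tree with delta = 1/2 everywhere, the optimal
    shape according to the theorem above."""
    if depth == 0:
        return None
    sub = complete_tree(depth - 1)
    return (0.5, sub, sub)
```

For the complete binary tree of depth n with δv = 1/2 the sum evaluates to n·(1 − H(ε*)), matching the Index calculation of slide 13.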

  18. Two interesting comparisons • Comparison to decision trees: • Decision tree complexity: make the longest path short. • Here: make the shortest path long. • Comparison to the VC-dim lower bound: • [Thm] The value is exactly the extensive equivalence query complexity, a measure from learning theory. • This strengthens the VC-dim lower bound by Nayak.

  19. Trace distance bound • [Aaronson’05] • μ is a distribution on 1-inputs. • D1: (x, y) ← μ. • D2: y ← μ, then x1, x2 ← μy i.i.d. • Then Q1(f) = Ω(log(1/∥D2 − D1²∥1)).
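For small matrices the ℓ1 quantity inside the bound can be computed exactly. A sketch, assuming f is given by its 0/1 communication matrix with μ uniform over the 1-inputs (the function name and representation are mine):

```python
import numpy as np

def trace_distance_quantity(F) -> float:
    """||D2 - D1^2||_1 for the distributions in Aaronson's bound.

    F is a 0/1 matrix (rows: Alice's x, columns: Bob's y), and mu is
    uniform over its 1-entries.  D1^2 draws x1, x2 independently from
    mu's x-marginal; D2 first draws y from mu's y-marginal and then
    x1, x2 i.i.d. from mu conditioned on that y."""
    F = np.asarray(F, dtype=float)
    mu = F / F.sum()                  # joint distribution on (x, y)
    px = mu.sum(axis=1)               # marginal on x
    py = mu.sum(axis=0)               # marginal on y
    D1sq = np.outer(px, px)           # independent pair (x1, x2)
    D2 = np.zeros_like(D1sq)
    for j, qy in enumerate(py):
        if qy > 0:
            cond = mu[:, j] / qy      # mu(x | y = j)
            D2 += qy * np.outer(cond, cond)
    return float(np.abs(D2 - D1sq).sum())
```

For the N×N identity matrix (Equality) the distance is 2(1 − 1/N): D2 concentrates on the diagonal while D1² is spread out, so the bound degenerates to O(1) there.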

  20. Separation • [Thm] Take a random graph G(N,p) with ω(log^4 N / N) ≤ p ≤ 1 − Ω(1). Its adjacency matrix, viewed as a bivariate function f, satisfies Q(f) = Ω(log(pN)) w.p. 1 − o(1). • Q*(f) ≥ Q(f) = Ω(log(1/disc(f))). • disc(f) is related to σ2(D^{-1/2}AD^{-1/2}), which can be bounded by O(1/√(pN)) for a random graph.
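The spectral quantity σ2(D^{-1/2}AD^{-1/2}) is easy to check numerically; a sketch with numpy (the sampling setup is illustrative, and the code assumes no isolated vertices):

```python
import numpy as np

def second_singular_value(A) -> float:
    """sigma_2 of the normalized adjacency matrix D^{-1/2} A D^{-1/2},
    the quantity controlling disc(f) on this slide.  Assumes every
    vertex has nonzero degree."""
    A = np.asarray(A, dtype=float)
    d = A.sum(axis=1)                          # degrees
    Dinv = np.diag(1.0 / np.sqrt(d))
    s = np.linalg.svd(Dinv @ A @ Dinv, compute_uv=False)
    return float(s[1])                         # s[0] = 1 for connected graphs

# Illustrative G(N, p) sample; sigma_2 should be O(1 / sqrt(pN)).
rng = np.random.default_rng(0)
N, p = 500, 0.2
U = np.triu((rng.random((N, N)) < p).astype(float), 1)
A = U + U.T                                    # symmetric, zero diagonal
```

Here pN = 100, so σ2 comes out on the order of 0.1, well below the trivial bound of 1.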

  21. [Thm] For p = N^{-Ω(1)}, PT(f) = O(1) w.h.p. • By our characterization, it suffices to consider the complete binary tree. • For p = N^{-Ω(1)}, each level of the tree shrinks the number of 1’s by a factor of p. • pN → p²N → p³N → … → 0: only O(1) steps. • [Thm] For p = o(N^{-6/7}), TD(f) = O(1) w.h.p. • The proof is quite technical and omitted here.
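The pN → p²N → … shrinking argument can be mimicked with a toy calculation (heuristic; names are mine): starting from about pN ones, each tree level multiplies the count by p, so for p = N^{-c} the count dies out after O(1) levels.

```python
def pt_depth_estimate(N: float, p: float) -> int:
    """Number of tree levels until the expected count of 1's drops
    below one, following the shrinking argument: pN, p^2 N, p^3 N, ...
    For p = N^{-c} this is at most about 1/c, a constant."""
    ones = p * N
    depth = 0
    while ones >= 1.0:
        ones *= p
        depth += 1
    return depth
```

For instance, N = 10^6 with p = 0.003 (roughly N^{-0.42}) dies after 2 levels, whereas a constant p such as 1/2 takes Θ(log N) levels.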

  22. Putting together • [Thm] Take a random graph G(N,p) with ω(log^4 N / N) ≤ p ≤ 1 − Ω(1). Its adjacency matrix, viewed as a bivariate function f, satisfies Q(f) = Ω(log(pN)) w.p. 1 − o(1). • [Thm] For p = o(N^{-6/7}), TD(f) = O(1) w.h.p. • [Thm] For p = N^{-Ω(1)}, PT(f) = O(1) w.h.p. • Taking p between ω(log^4 N / N) and o(N^{-6/7}) gives the separation.

  23. Discussions • Negative results on the tightness of known quantum lower bound methods. • This calls for new methods. • Can we somehow combine the advantages of the existing methods? • We hope the paper sheds some light on this by identifying their weaknesses.
