
BosonSampling

This seminar discusses the concept of Boson Sampling, a rudimentary subset of quantum computing that involves non-interacting bosons. It explores the implications and potential computational power of using bosons to calculate the permanent and sample hard distributions. The seminar also highlights the difficulty in classically simulating the boson distribution and presents conjectures regarding the hardness of certain problems. Presented by Scott Aaronson from the University of Texas, Austin.


Presentation Transcript


  1. BosonSampling Scott Aaronson (University of Texas, Austin) AMO Seminar, Weizmann Institute, January 9, 2017 Based mostly on joint work with Alex Arkhipov

  2. The Extended Church-Turing Thesis (ECT): Everything feasibly computable in the physical world is feasibly computable by a Turing machine. Shor’s Theorem: Quantum Simulation has no efficient classical algorithm, unless Factoring does also.

  3. So the ECT is false … what more evidence could anyone want? • Building a QC able to factor large numbers is hard! After 24 years, no fundamental obstacle has been found, but who knows? • Can’t we “meet the physicists halfway,” and show computational hardness for quantum systems closer to what they actually work with now? • Factoring might have a fast classical algorithm! At any rate, it’s an extremely “special” problem • Wouldn’t it be great to show that if quantum computers can be simulated classically, then (say) P=NP?

  4. Our Starting Point: BOSONS vs. FERMIONS. For n identical, non-interacting particles, the transition amplitude is the permanent of an n×n matrix of single-particle amplitudes in the bosonic case, and the determinant of that same matrix in the fermionic case.

  5. Complexity in One Slide. The permanent [Valiant 1979] is #P-complete, while the determinant is computable in polynomial time. The relevant complexity classes: P^#P (P with an oracle for counting problems), PH (constant number of NP quantifiers), NP (efficiently checkable problems), BQP (“Quantum P”), and P (=BPP?) (efficiently solvable problems).
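
To make the permanent/determinant gap concrete, here is a minimal sketch (not from the slides) contrasting Ryser's exponential-time formula for the permanent with the polynomial-time determinant; the helper name permanent_ryser is illustrative.

```python
import itertools
import numpy as np

def permanent_ryser(A):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion formula (exponential time)."""
    n = A.shape[0]
    total = 0.0
    for subset in itertools.product([0, 1], repeat=n):   # every subset of the columns
        k = sum(subset)
        if k == 0:
            continue
        cols = [j for j in range(n) if subset[j]]
        row_sums = A[:, cols].sum(axis=1)
        total += (-1) ** (n - k) * np.prod(row_sums)
    return total

A = np.random.randn(4, 4)
print("Per(A) =", permanent_ryser(A))   # exponential-time; exact computation is #P-hard in general
print("Det(A) =", np.linalg.det(A))     # polynomial-time
```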

  6. Can We Use Bosons to Calculate the Permanent? So if n-boson amplitudes correspond to permanents… that sounds way too good to be true: it would let us solve NP-complete problems and more using QC! Explanation: Amplitudes aren’t directly observable. To get a reasonable estimate of Per(A), you might need to repeat the experiment exponentially many times.
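
A back-of-the-envelope illustration of that last point (numbers purely illustrative): estimating a probability p to relative error eps by repeated measurement takes on the order of 1/(p·eps^2) runs, which explodes once p is exponentially small.

```python
import numpy as np

def shots_needed(p, eps=0.1):
    """Rough number of experimental runs to estimate probability p to relative error eps."""
    return int(np.ceil(1.0 / (p * eps ** 2)))

for n in [5, 10, 20, 30]:
    p = 2.0 ** (-n)                 # stand-in for an exponentially small |amplitude|^2
    print(n, shots_needed(p))       # grows like 2^n / eps^2
```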

  7. [A.-Arkhipov 2011]: Even so, the fact that amplitudes are permanents lets us Use Bosons to Sample Hard Distributions (e.g., a random matrix A with probability weighted by |Per(A)|^2). Basic Result: Suppose there were a polynomial-time classical randomized algorithm that took as input a description of a noninteracting-boson experiment, and that output a sample from the correct final distribution over n-boson states. Then P^#P = BPP^NP and the polynomial hierarchy collapses. Motivation: Compared to (say) Shor’s algorithm, we get “stronger” evidence that a “weaker” system can do interesting quantum computations.

  8. The Quantum Optics Model: a rudimentary subset of quantum computing, involving only non-interacting bosons, and not based on qubits. Classical counterpart: Galton’s Board, on display at many science museums. Using only pegs and non-interacting balls, you probably can’t build a universal computer, but you can do some interesting computations, like generating the binomial distribution!
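
A minimal Galton-board simulation (parameters are illustrative): each ball bounces left or right with probability 1/2 at each of `depth` pegs, so its final bin follows the Binomial(depth, 1/2) distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def galton_board(depth, n_balls):
    """Each ball's final bin = number of rightward bounces over `depth` pegs."""
    bounces = rng.integers(0, 2, size=(n_balls, depth))
    return bounces.sum(axis=1)

bins = galton_board(depth=10, n_balls=100_000)
counts = np.bincount(bins, minlength=11)
print(counts / counts.sum())        # close to the Binomial(10, 1/2) probabilities
```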

  9. The Quantum Version: Let’s replace the balls by identical single photons, and the pegs by beamsplitters. Then we see strange things like the Hong-Ou-Mandel dip: the two photons are now correlated, even though they never interacted! The explanation involves destructive interference of amplitudes: the final amplitude of non-collision is (1/√2)(1/√2) − (1/√2)(1/√2) = 0.
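
That cancellation is just the permanent of the 50/50 beamsplitter's 2×2 unitary; a quick numerical check (the matrix convention below is one standard choice, not necessarily the one used in the talk):

```python
import numpy as np

B = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # 50/50 beamsplitter unitary

per = B[0, 0] * B[1, 1] + B[0, 1] * B[1, 0]       # permanent of a 2x2 matrix
det = np.linalg.det(B)

print("bosonic non-collision amplitude :", per)   #  0.0 -> the photons always bunch
print("fermionic analogue (determinant):", det)   # -1.0 -> fermions never bunch
```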

  10. Getting Formal. The basis states have the form |S⟩ = |s1, …, sm⟩, where si is the number of photons in the ith “mode”. We’ll never create or destroy photons, so s1 + … + sm = n is constant. For us, m = n^{O(1)}. Initial state: |I⟩ = |1, …, 1, 0, …, 0⟩.

  11. You get to apply any m×m unitary matrix U, say by using a collection of 2-mode beamsplitters. In general, there are C(m+n−1, n) ways to distribute n identical photons into m modes. U induces an M×M unitary φ(U) on the n-photon states, with M = C(m+n−1, n), as follows: ⟨S|φ(U)|T⟩ = Per(U_{S,T}) / √(s1! ⋯ sm! · t1! ⋯ tm!). Here U_{S,T} is an n×n submatrix of U (possibly with repeated rows and columns), obtained by taking si copies of the ith row of U and tj copies of the jth column, for all i, j.
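
A minimal numerical sketch of this formula (the helper names `permanent`, `submatrix`, and `amplitude` are illustrative, not from any library); it reproduces the Hong-Ou-Mandel amplitudes from the previous slide.

```python
import itertools
import numpy as np
from math import factorial, prod, sqrt

def permanent(M):
    """Brute-force permanent (n! time); fine for the tiny examples here."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

def submatrix(U, S, T):
    """U_{S,T}: s_i copies of row i and t_j copies of column j."""
    rows = [i for i, s in enumerate(S) for _ in range(s)]
    cols = [j for j, t in enumerate(T) for _ in range(t)]
    return U[np.ix_(rows, cols)]

def amplitude(U, S, T):
    """<S| phi(U) |T> = Per(U_{S,T}) / sqrt(s_1!...s_m! * t_1!...t_m!)."""
    norm = sqrt(prod(factorial(s) for s in S) * prod(factorial(t) for t in T))
    return permanent(submatrix(U, S, T)) / norm

# Sanity check against the Hong-Ou-Mandel dip (50/50 beamsplitter):
B = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(abs(amplitude(B, (1, 1), (1, 1))) ** 2)   # 0.0: the photons never exit separately
print(abs(amplitude(B, (1, 1), (2, 0))) ** 2)   # 0.5: both photons exit the first mode
```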

  12. OK, so why is it hard to sample the distribution over photon numbers classically? Given any matrix A ∈ C^{n×n}, we can construct an m×m unitary U (where m ≤ 2n) whose top-left n×n block is εA, for a suitable scaling factor ε > 0. Suppose we start with |I⟩ = |1, …, 1, 0, …, 0⟩ (one photon in each of the first n modes), apply U, and measure. Then the probability of observing |I⟩ again is p := |Per(εA)|^2 = ε^{2n} |Per(A)|^2.
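
A sketch of one standard way to carry out such an embedding (the 0.9 safety factor and the helper name are illustrative; the talk does not spell out a construction): the Halmos dilation places εA in the top-left block of a 2n×2n unitary.

```python
import numpy as np
from scipy.linalg import sqrtm

def embed_in_unitary(A, eps=None):
    """Halmos dilation: a 2n x 2n unitary whose top-left block is eps*A."""
    n = A.shape[0]
    if eps is None:
        eps = 0.9 / np.linalg.norm(A, 2)          # make eps*A a strict contraction
    C = eps * A
    I = np.eye(n)
    U = np.block([[C,                          sqrtm(I - C @ C.conj().T)],
                  [sqrtm(I - C.conj().T @ C), -C.conj().T]])
    return U, eps

A = np.random.randn(3, 3)
U, eps = embed_in_unitary(A)
print(np.allclose(U @ U.conj().T, np.eye(6)))   # True: U is unitary
print(np.allclose(U[:3, :3], eps * A))          # True: eps*A sits in the top-left corner
# The |I> -> |I> probability is then |Per(eps*A)|^2 = eps^(2n) * |Per(A)|^2.
```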

  13. Claim 1: p is #P-complete to estimate (up to a constant factor). Idea: Valiant proved that the Permanent is #P-complete. Can use a classical reduction to go from a multiplicative approximation of |Per(A)|^2 to Per(A) itself. Claim 2: Suppose we had a fast classical algorithm for boson sampling. Then we could estimate p in BPP^NP. Idea: Let M be our classical sampling algorithm, and let r be its randomness. Use approximate counting to estimate Pr_r[M(r) outputs |I⟩]. Conclusion: Suppose we had a fast classical algorithm for BosonSampling. Then P^#P = BPP^NP.
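
For concreteness, here is a brute-force classical sampler of the exact n-photon distribution (it reuses the illustrative `amplitude` helper from the slide-11 sketch). It runs in exponential time, which is consistent with the theorem; only a polynomial-time such sampler would give the collapse above.

```python
import itertools
import numpy as np

def all_configs(n, m):
    """All ways to put n identical photons into m modes."""
    if m == 1:
        yield (n,)
        return
    for k in range(n + 1):
        for rest in all_configs(n - k, m - 1):
            yield (k,) + rest

def boson_sample(U, n, rng):
    """Draw one sample from the exact n-photon output distribution of U."""
    m = U.shape[0]
    I = tuple([1] * n + [0] * (m - n))
    configs = list(all_configs(n, m))
    probs = np.array([abs(amplitude(U, I, T)) ** 2 for T in configs])  # amplitude() from the slide-11 sketch
    probs /= probs.sum()                          # guard against floating-point drift
    return configs[rng.choice(len(configs), p=probs)]

rng = np.random.default_rng(0)
# e.g. print(boson_sample(U, 3, rng)) with the 6x6 unitary U built in the previous sketch
```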

  14. The Elephant in the Room. The previous result hinged on the difficulty of estimating a single, exponentially small probability p, but what about noise and error? The “right” question: can a classical computer efficiently sample a distribution within 1/n^{O(1)} variation distance of the boson distribution? Our Main Result: Suppose it can. Then there’s a BPP^NP algorithm to estimate |Per(A)|^2, with high probability over a Gaussian matrix A.

  15. Our Main Conjecture: Estimating |Per(A)|^2, with high probability over i.i.d. Gaussian A, is a #P-hard problem. If this conjecture holds, then even a noisy n-photon experiment could falsify the Extended Church-Turing Thesis, assuming P^#P ≠ BPP^NP! Much of our work was devoted to giving evidence for this conjecture. What makes the Gaussian ensemble special? Theorem: It arises by considering sufficiently small submatrices of Haar-random unitary matrices.
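
That theorem can be checked numerically; a sketch assuming the standard QR construction of Haar-random unitaries (the sizes below are illustrative): a small submatrix, rescaled by √m, has entries that look like i.i.d. standard complex Gaussians.

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_unitary(m, rng):
    """Haar-random m x m unitary via QR of a complex Ginibre matrix, with phase correction."""
    Z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

n, m = 3, 400
sub = np.sqrt(m) * haar_unitary(m, rng)[:n, :n]   # rescaled top-left n x n block
print(np.mean(np.abs(sub) ** 2))                  # roughly 1, like a standard complex Gaussian
```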

  16. Recent Breakthrough [Eldar and Mehraban 2017]: an n^{polylog(n)}-time algorithm, using complex-analysis methods, to approximate the permanent of an n×n matrix A of i.i.d. Gaussian entries with variance 1 and mean 1/polylog(n), with high probability over A. If this could be extended to the mean-0 case (or even mean 1/poly(n)), it would falsify our main conjecture.

  17. BosonSampling Experiments. Initial experiments with 3 photons (groups in Rome, Oxford, Vienna, and Brisbane). Carolan et al. 2015: with 6 photons, but initial states of the form |3,3⟩. Wang et al. 2017: with 5 photons, initial states of the form |1,1,1,1,1⟩.

  18. Challenges for Scaling Up: • Reliable single-photon sources (optical multiplexing?) • Minimizing losses • Getting high probability of n-photon coincidence • Goal (in our view): scale to 30-50 photons. We don’t want to scale much beyond that, both because (a) you probably can’t without fault-tolerance, and (b) a classical computer probably couldn’t even verify the results!

  19. Scattershot BosonSampling Idea, proposed by Steve Kolthammer and others, for sampling a hard distribution even with highly unreliable (but heralded) photon sources, like SPDCs The idea: Say you have 100 sources, of which only 10 (on average) generate a photon. Then just detect which sources succeed, and use those to define your BosonSampling instance! Complexity analysis turns out to go through essentially without change
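
A toy model of the scattershot idea (the source count and firing probability come from the example above; the code is purely illustrative): detect which heralded sources fired, and let that subset define the run's input modes.

```python
import numpy as np

rng = np.random.default_rng(3)

n_sources, p_fire = 100, 0.1
fired = np.flatnonzero(rng.random(n_sources) < p_fire)   # which heralded sources produced a photon
print("heralded input modes this run:", fired)
# The interferometer U stays fixed; each run's BosonSampling instance is defined by the
# rows (or columns, depending on convention) of U indexed by `fired`, about 10 per run on average.
```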

  20. Using Quantum Optics to Prove that the Permanent is #P-Complete [A. 2011]. Valiant showed that the permanent is #P-complete, but his proof required strange, custom-made gadgets. We gave a new, arguably more transparent proof by combining three facts: (1) n-photon amplitudes correspond to n×n permanents; (2) postselected quantum optics can simulate universal quantum computation [Knill-Laflamme-Milburn 2001]; (3) quantum computations can encode #P-complete quantities in their amplitudes.

  21. Are There BosonSampling Instances Whose Answers Are Easily Verified? Idea: What if we could “smuggle” a matrix A with huge permanent, as a submatrix of a larger unitary matrix U? Finding A could be hard classically, but shooting photons into an interferometer network would easily reveal it. Pessimistic Conjecture: If U is unitary and |Per(U)| ≥ 1/n^{O(1)}, then U is “close” to a permuted diagonal matrix, so it “sticks out like a sore thumb”. A.-Nguyen 2014, Berkowitz-Devlin 2016: The pessimistic conjecture holds.

  22. BosonSampling with Lost Photons. Suppose we have n photons in the initial state, but k are randomly lost. Then the probability of each output has the form of a (suitably normalized) sum of |Per|^2 terms, one for each possible set of surviving photons, each involving an (n−k)×(n−k) submatrix. A.-Brod 2016: For any constant number of losses, k = O(1), the above quantities are #P-hard to approximate, assuming |Per(A)|^2 itself is. We don’t know what happens for k between O(1) and n-O(n)…
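
A hedged numerical sketch of that quantity, under the assumption that the k lost photons are chosen uniformly at random at the input (it reuses the illustrative `amplitude` helper from the slide-11 sketch):

```python
import itertools
import numpy as np

def lossy_output_prob(U, n, k, T):
    """Probability of output configuration T (which must contain n-k photons) when k of the
    n input photons are lost, averaging over which photons were lost."""
    m = U.shape[0]
    survivors = list(itertools.combinations(range(n), n - k))   # possible sets of surviving inputs
    total = 0.0
    for surv in survivors:
        S = tuple(1 if i in surv else 0 for i in range(m))
        total += abs(amplitude(U, S, T)) ** 2                   # amplitude() from the slide-11 sketch
    return total / len(survivors)
```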

  23. Summary Intuition suggests that not merely quantum computers, but many natural quantum systems, should be intractable to simulate on classical computers, because of the exponentiality of the wavefunction BosonSampling provides a clear example of how we can formalize this intuition—or at least, base it on “standard” conjectures in theoretical computer science. It’s also brought QC theory into closer contact with experiment. And it’s highlighted the remarkable connection between bosons and the permanent. Future progress may depend on solving hard open problems about the permanent
