
Solving Hard Problems With Light





Presentation Transcript


  1. Solving Hard Problems With Light — Scott Aaronson (Assoc. Prof., EECS). Joint work with Alex Arkhipov

  2. In 1994, something big happened in the foundations of computer science, whose meaning is still debated today… Why exactly was Shor’s algorithm important?
  • Boosters: Because it means we’ll build QCs!
  • Skeptics: Because it means we won’t build QCs!
  • Me: For reasons having nothing to do with building QCs!

  3. Shor’s algorithm was a hardness result for one of the central computational problems of modern science: Quantum Simulation. [Figure: Use of DoE supercomputers by area, from a talk by Alán Aspuru-Guzik] Shor’s Theorem: Quantum Simulation is not solvable efficiently (in polynomial time), unless Factoring is also.

  4. Today, a different kind of hardness result for simulating quantum mechanics.
  Advantages:
  • Based on more “generic” complexity assumptions than the hardness of Factoring
  • Gives evidence that QCs have capabilities outside the entire “polynomial hierarchy”
  • Requires only a very simple kind of quantum computation: nonadaptive linear optics (testable before I’m dead?)
  Disadvantages:
  • Applies to relational problems (problems with many possible outputs) or sampling problems, not decision problems
  • Harder to convince a skeptic that your computer is indeed solving the relevant hard problem
  • Less relevant for the NSA

  5. Bestiary of Complexity Classes [Diagram: P ⊆ BPP and NP (containing 3SAT), inside PH, alongside BQP (containing Factoring), all inside P#P (Counting, Permanent)] How complexity theorists say “such-and-such is damn unlikely”: “If such-and-such is true, then PH collapses to a finite level.” Example of a PH problem: “For all n-bit strings x, does there exist an n-bit string y such that for all n-bit strings z, φ(x,y,z) holds?” Just as they believe P≠NP, complexity theorists believe that PH is infinite. So if you can show “such-and-such is true ⇒ PH collapses to a finite level,” it’s damn good evidence that such-and-such is false.
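The quantifier structure of that example can be written out explicitly: starting with ∀ and alternating twice places the problem at the third level of the hierarchy (Π₃ᵖ). Here φ stands for whatever polynomial-time-checkable predicate the problem specifies (the symbol φ is my notation, not the slide's):

```latex
% Example PH problem: two quantifier alternations, so it lies in \Pi_3^p
\forall x \in \{0,1\}^n \;\; \exists y \in \{0,1\}^n \;\; \forall z \in \{0,1\}^n :\;\; \varphi(x,y,z)
```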

  6. Our Results
  • Suppose the output distribution of any linear-optics circuit can be efficiently sampled by a classical algorithm. Then the polynomial hierarchy collapses.
  • Indeed, even if such a distribution can be sampled by a classical computer with an oracle for the polynomial hierarchy, still the polynomial hierarchy collapses.
  • Suppose two plausible conjectures are true: the permanent of a Gaussian random matrix is (1) #P-hard to approximate, and (2) not too concentrated around 0. Then the output distribution of a linear-optics circuit can’t even be approximately sampled efficiently classically, unless the polynomial hierarchy collapses.
  If our conjectures hold, then even a noisy linear-optics experiment can sample from a probability distribution that no classical computer can feasibly sample from.

  7. Particle Physics In One Slide There are two basic types of particle in the universe: BOSONS and FERMIONS. Their transition amplitudes are given respectively by the permanent and the determinant. All I can say is, the bosons got the harder job.
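The formulas the slide refers to are the standard ones: for an n×n matrix A = (a_ij) of single-particle transition amplitudes, bosonic amplitudes are given by the permanent and fermionic amplitudes by the determinant, which differ only in the sign sgn(σ):

```latex
\mathrm{Per}(A) \;=\; \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)},
\qquad
\mathrm{Det}(A) \;=\; \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}.
```

That one sign makes all the difference: the determinant is computable in O(n³) time by Gaussian elimination, while computing the permanent exactly is #P-hard (Valiant).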

  8. High-Level Idea Estimating a sum of exponentially many positive or negative numbers: #P-hard Estimating a sum of exponentially many nonnegative numbers: Still hard, but known to be in PH If quantum mechanics could be efficiently simulated classically, then these two problems would become equivalent—thereby placing #P in PH, and collapsing PH
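A toy numerical illustration of the gap this slide describes (all names are mine, and 2^16 terms stand in for "exponentially many"): a small random sample estimates a sum of nonnegative numbers to good relative accuracy, but the same estimator fails when positive and negative terms nearly cancel, which is the regime where #P-hardness lives:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2**16  # stand-in for "exponentially many" terms

# Nonnegative terms: a sample of 1000 estimates the sum well.
pos = rng.random(n)
est_pos = pos[rng.integers(0, n, 1000)].mean() * n
print(abs(est_pos - pos.sum()) / pos.sum())  # small relative error

# Signed terms that nearly cancel: the same estimator is useless,
# because the sampling error swamps the tiny true sum.
signed = pos * rng.choice([-1.0, 1.0], n)
est_signed = signed[rng.integers(0, n, 1000)].mean() * n
print(signed.sum(), est_signed)
```

This mirrors the slide: approximating the nonnegative sum is doable with randomness (and sits in PH, by Stockmeyer-style approximate counting), while the signed sum is where the hardness concentrates.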

  9. So why aren’t we done? Because real quantum experiments are subject to noise. Would an efficient classical algorithm that simulated a noisy optics experiment still collapse the polynomial hierarchy? Main Result: Yes, assuming two plausible conjectures about permanents of random matrices (the “PCC” and the “PGC”). Particular experiment we have in mind: Take a system of n identical photons with m=O(n²) modes. Put each photon in a known mode, then apply a Haar-random m×m unitary transformation U. Then measure which modes have 1 or more photons in them.
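For small m, the Haar-random unitary U in this experiment can be sampled classically. A minimal sketch (function name mine), using the standard QR-of-Gaussian recipe with the phase correction that makes the distribution exactly Haar:

```python
import numpy as np

def haar_unitary(m, seed=None):
    """Sample an m x m unitary from the Haar measure.

    QR-decompose a complex Gaussian matrix, then multiply each
    column of Q by the phase of the corresponding diagonal entry
    of R; without this fix, plain QR output is not Haar-distributed.
    """
    rng = np.random.default_rng(seed)
    z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))  # broadcast scales each column of q

U = haar_unitary(6, seed=0)
print(np.allclose(U @ U.conj().T, np.eye(6)))  # unitarity check
```

SciPy's `scipy.stats.unitary_group` implements the same sampler if a dependency is acceptable.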

  10. The Permanent Concentration Conjecture (PCC) There exists a polynomial p such that for all n, Pr over n×n complex Gaussian X of [ |Per(X)| < √(n!)/p(n) ] < 1/p(n). Empirically true! Also, we can prove it with determinant in place of permanent.

  11. The Permanent-of-Gaussians Conjecture (PGC) Let X be an n×n matrix of independent N(0,1) complex Gaussian entries. Then approximating Per(X) to within a 1/poly(n) multiplicative error, for a 1-1/poly(n) fraction of X, is a #P-hard problem.
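For a feel of the scaling involved, here is a minimal sketch (names mine) of exact permanent computation via Ryser's formula, which already takes O(2^n · n²) time — the fastest simple classical method, and the quantity whose approximation the PGC conjectures to be #P-hard:

```python
import itertools
import numpy as np

def permanent(a):
    """Exact permanent via Ryser's formula, O(2^n * n^2) time.

    Per(A) = (-1)^n * sum over nonempty column subsets S of
             (-1)^|S| * prod_i (sum_{j in S} a[i][j]).
    Contrast with the determinant, computable in O(n^3).
    """
    n = a.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            row_sums = a[:, list(cols)].sum(axis=1)
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

# Permanent of the all-ones 3x3 matrix is 3! = 6.
print(permanent(np.ones((3, 3))))
```

Already at n=20 this loop touches about a million subsets per permanent, which is why the proposed experiment sits right at the edge of what classical simulation can check.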

  12. Experimental Prospects • What would it take to implement the requisite experiment? • Reliable phase-shifters and beamsplitters, to implement an arbitrary unitary on m photon modes • Reliable single-photon sources • Photodetector arrays that can reliably distinguish 0 vs. 1 photon • But crucially, no nonlinear optics or postselected measurements! Our Proposal: Concentrate on (say) n=20 photons and m=400 modes, so that classical simulation is nontrivial but not impossible

  13. Summary • I often say that Shor’s algorithm presented us with three choices. Either
  • The laws of physics are exponentially hard to simulate on any computer of today,
  • Textbook quantum mechanics is false, or
  • Quantum computers are easy to simulate classically (for all intents and purposes?).
