
Scott Aaronson (UT Austin), a farm somewhere outside Santa Barbara, CA, May 16, 2019

This talk by Scott Aaronson explores certified randomness from quantum supremacy: a protocol that forces a remote NISQ quantum computer to generate fresh random bits that no one (not even the device) knew beforehand. It covers possible use cases, such as public randomness for proof-of-stake cryptocurrencies and auditing hardware random number generators, the question of who picks the challenge circuits and why the challenger should be trusted, the HOG test and the complexity-theoretic analysis behind it, and what has changed in the field since the previous year's talk.


Presentation Transcript


  1. Aspects of Certified Randomness from Quantum Supremacy
1101000011010011110110110011001100010100100110100011111011110100
Scott Aaronson (UT Austin), a farm somewhere outside Santa Barbara, CA, May 16, 2019

  2. Three Views of Quantum Supremacy
(1) The “useful application” is to refute the QC skeptics!
(2) We need to pick real applications (ML, finance, optimization), then find big quantum speedups for them. Risk of “Ewinization”: the hoped-for speedups might get dequantized.
(3) Let’s start with what would refute the QC skeptics, and then see whether it’s useful for anything.

  3. The Randomness Protocol. “Born from complexity theory. Somehow became first planned application for Bristlecone / Sycamore…”
[Diagram: the client’s SEED generates CHALLENGES sent to the quantum computer]
Goal: By interacting with a NISQ QC remotely, force it to generate fresh random bits, which no one (not even the QC) knew beforehand. Place no trust in the QC!
“Proof of Sampling”: modest quantum speedups, not for their own sake, but as proof of some other property.

  4. Possible Sycamore Use Case
Public, trusted random bits, e.g. for proof-of-stake cryptocurrencies such as future-Ethereum.
Problem: Who picks the challenges to send Sycamore, and why should everyone else trust the challenger?
Ideas: Challenges could come from a distributed cryptographic protocol involving a pseudorandom function, a blockchain, the NYSE, the weather…
If the challenges are unpredictable, then why not just use them to generate the random bits, and skip the QC part? One answer: “forward secrecy”

  5. Other Possible Use Cases
Secret random bits … if you own the QC yourself!
Or, as long as you trust Google to try to generate your keys randomly, Google could use my protocol to convince you that its RNG hardware isn’t malfunctioning or backdoored by someone else.
Or, you could give Google a tamper-resistant “Hardware Security Module” that acts as the challenger for you.
With thanks to Jimmy Chau

  6. I spoke on exactly the same topic at this conference exactly a year ago. What’s changed since then? Mostly, I now have time to explain the parts that I skipped last year…

  7. So what was I doing all year?
Gentle measurement and differential privacy (A.-Rothblum, arXiv:1904.08747)
New QFT-free quantum approximate counting algorithm
Quantum lower bounds for approximate counting using Laurent polynomials (AKKT, arXiv:1904.08914)
Oracle relative to which NP ⊆ BQP but PH is infinite
Teaching, faculty recruiting, travel, kids…

  8. The Protocol
1. The classical client generates n-qubit quantum circuits C1,…,CT pseudorandomly (mimicking a random ensemble).
2. For each t, the client sends Ct to the server, then demands a response St within a very short time. In the “honest” case, the response is a list of k samples from the output distribution of Ct|0^n⟩.
3. The client picks a few random iterations t, and for each one, applies a “HOG” (Heavy Output Generation) test.
4. If the tests pass, then the client feeds S=S1,…,ST into a classical randomness extractor, such as GUV (Guruswami-Umans-Vadhan), to get nearly pure random bits.
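A minimal client-side sketch of these four steps, in Python; circuit_from_seed, server.respond, hog_test, and guv_extract are hypothetical placeholder names, and the constants are purely illustrative, none of it from the talk:

```python
# Hypothetical sketch of the client loop above; circuit_from_seed, server.respond,
# hog_test, and guv_extract are placeholder names, and all constants are illustrative.
import os
import random

T = 1000          # number of rounds
K = 10            # samples demanded per round
TIME_LIMIT = 0.1  # seconds; assumed too short for classical spoofing

def run_client(server, n=53):
    seed = os.urandom(32)                              # client's secret seed
    transcripts = []
    for t in range(T):
        C_t = circuit_from_seed(seed, t, n)            # step 1: pseudorandom circuit C_t
        samples, elapsed = server.respond(C_t, K)      # step 2: k purported samples
        if elapsed > TIME_LIMIT:
            raise RuntimeError("response too slow; it could have been spoofed classically")
        transcripts.append(samples)                    # list of n-bit strings
    for t in random.sample(range(T), 5):               # step 3: spot-check with HOG
        C_t = circuit_from_seed(seed, t, n)            # (each check costs ~2^n classically)
        if not hog_test(C_t, transcripts[t], b=1.002):
            raise ValueError(f"HOG test failed on round {t}")
    raw = "".join(s for samples in transcripts for s in samples)
    return guv_extract(raw)                            # step 4: classical extractor (e.g. GUV)
```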

  9. The HOG Test (similar in spirit to Google’s cross-entropy test)
Given an n-qubit quantum circuit C, parameters b ∈ (1,2) and k ∈ {1,2,…}, and purported distinct samples s1,…,sk from C’s output distribution, HOG_{b,k}(C) checks whether
$\frac{1}{k}\sum_{i=1}^{k} 2^n \left|\langle s_i|C|0^n\rangle\right|^2 \;\ge\; b.$
Note: To perform the HOG test, we need a brute-force (~2^n-time) classical computation! If n=53 this is doable…
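As a concrete illustration (not code from the talk), here is a minimal numpy sketch of the check above, assuming the brute-force output probabilities of C|0^n⟩ have already been computed:

```python
# Minimal sketch of the HOG check, assuming the brute-force output distribution
# of C|0^n> is already available as a length-2^n numpy array of probabilities.
# (Computing that array is the ~2^n-time classical step the slide mentions.)
import numpy as np

def hog_test(probs: np.ndarray, samples: list[int], b: float) -> bool:
    """Pass iff the mean scaled probability of the claimed samples is at least b.

    probs   -- probs[s] = |<s|C|0^n>|^2 for every n-bit string s (as an integer)
    samples -- the k purported samples s_1, ..., s_k returned by the server
    b       -- threshold in (1, 2); ideal samples average ~2, uniform ones ~1
    """
    N = len(probs)                                     # N = 2^n
    scaled = N * probs[np.array(samples)]
    return scaled.mean() >= b

# Tiny illustration with a mock Porter-Thomas distribution (not a real circuit):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N = 2**12
    p = rng.exponential(size=N); p /= p.sum()          # mock Porter-Thomas probabilities
    honest = rng.choice(N, size=1000, p=p)             # samples drawn from the distribution
    spoof = rng.integers(N, size=1000)                 # uniformly random samples
    print(hog_test(p, honest.tolist(), b=1.5))         # typically True
    print(hog_test(p, spoof.tolist(), b=1.5))          # typically False
```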

  10. Porter-Thomas Speckle Behavior
For large k and s1,…,sk sampled by an ideal QC,
$\frac{1}{k}\sum_{i=1}^{k} 2^n \left|\langle s_i|C|0^n\rangle\right|^2 \approx 2.$
For s1,…,sk sampled uniformly at random,
$\frac{1}{k}\sum_{i=1}^{k} 2^n \left|\langle s_i|C|0^n\rangle\right|^2 \approx 1.$
“Just like the Bell inequality”
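A quick check of those two limiting values, under the standard Porter-Thomas idealization that the output probabilities behave like i.i.d. exponentials with mean 2^{-n} (this modeling assumption is my gloss, not spelled out on the slide):

```latex
% Sketch of both limits, assuming Porter-Thomas statistics: the probabilities
% p_s = |<s|C|0^n>|^2 behave like i.i.d. exponentials with mean 2^{-n}.
\begin{align*}
\text{uniform } s:\quad
  \mathbb{E}\!\left[2^n p_s\right]
  &= 2^n \cdot \mathbb{E}[p_s] = 2^n \cdot 2^{-n} = 1, \\
\text{ideal QC } (s \text{ drawn w.p. } p_s):\quad
  \mathbb{E}\!\left[2^n p_s\right]
  &= \sum_s p_s \cdot 2^n p_s
   = 2^n \sum_s p_s^2
   \approx 2^n \cdot 2^n \cdot \left(2 \cdot 2^{-2n}\right) = 2,
\end{align*}
% using E[p^2] = 2 * 2^{-2n} for an exponential with mean 2^{-n}.
```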

  11. Proving Porter-Thomasness! Unfortunately, not yet for “natural” ensembles of random quantum circuits, but for the “Idaho ensemble”:
“Re-randomization”: a Haar-random unitary on the leftmost O(log(n)) qubits
A random quantum circuit of size ~n^23, proven to be an approximate t-design (t >> n^2) by Brandão, Harrow, Horodecki 2012
[Circuit diagram: n qubits, each initialized to |0⟩]

  12. Why must there be any entropy in the server’s responses?
Suppose to the contrary that S=f(C), for some deterministic function f ∈ BQP, with S=s1,…,sk typically passing HOG. We claim this would imply strange computational powers…
Long List Quantum Supremacy Verification (LLQSV): You’re given a giant list of quantum circuits C1,…,CM, as well as strings s1,…,sM. You’re asked: were the si’s sampled uniformly at random, or was each si sampled from the output distribution of Ci? Seems hard … even for a QC!
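The same problem phrased as a distinguishing task (my paraphrase of the slide, not a formal definition from the talk):

```latex
% LLQSV as a distinguishing problem (paraphrase of the slide's informal statement).
% Input: circuits C_1,\dots,C_M and strings s_1,\dots,s_M \in \{0,1\}^n.
% Exactly one of the following holds:
\begin{align*}
\textbf{YES:}\quad & \text{each } s_i \text{ was drawn from } D_{C_i},
  \text{ where } D_{C_i}(s) = |\langle s | C_i | 0^n \rangle|^2, \\
\textbf{NO:}\quad  & \text{each } s_i \text{ was drawn uniformly from } \{0,1\}^n.
\end{align*}
% Task: decide which case holds, with constant advantage over guessing.
```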

  13. Key Claim: If a fast QC could pass HOG deterministically, then there would be a QCAM protocol for LLQSV.
QCAM: Arthur-Merlin where Arthur can send a classical random challenge to Merlin, get back a classical response, then verify it quantumly.
Proof: The whole problem boils down to approximate counting: how many i’s are there in our long list such that f(Ci)=si?
[Diagram: the list of circuits C1,…,C8 paired with the strings s1,…,s8]
Arthur sends Merlin a random hash function g, then Merlin sends back an i such that f(Ci)=si and g(i)=0…0.
Note: If the Ci’s were classical, there would be an AM protocol!
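The hashing step is the standard Goldwasser-Sipser approximate-counting trick; roughly (my gloss, assuming a pairwise-independent hash family):

```latex
% Goldwasser-Sipser-style counting (standard trick; my gloss of why the hash helps).
% Let K = |\{\, i \in [M] : f(C_i) = s_i \,\}|, and let g : [M] \to \{0,1\}^{\ell}
% be a random pairwise-independent hash.  Then
\[
  \Pr_g\!\big[\exists\, i:\ f(C_i) = s_i \ \wedge\ g(i) = 0^{\ell}\big]
  \;\approx\; \min\{1,\; K \cdot 2^{-\ell}\},
\]
% so Merlin's ability to exhibit such an i (which Arthur checks quantumly, by
% running f on C_i) lets Arthur estimate K, and the two LLQSV cases lead to
% different typical values of K.
```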

  14. Min-Entropy
More relevant than Shannon entropy for randomness extraction protocols.
Typically enough to be ε-close in variation distance to a distribution with high min-entropy.
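For reference, the standard definitions being used here (not written out on the slide):

```latex
% Standard definitions, added for reference (not on the slide itself).
\[
  H_{\min}(D) \;=\; -\log_2 \max_{x} \Pr_{X \sim D}[X = x],
  \qquad
  \| D - D' \|_{\mathrm{TV}} \;=\; \tfrac{1}{2} \sum_x \big| D(x) - D'(x) \big|.
\]
% "Typically enough": it suffices that the output distribution is epsilon-close
% in total variation distance to some distribution with high min-entropy.
```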

  15. Boosting from Ω(1) to Ω(n) bits of min-entropy per round
Claim: Suppose that a BQP machine M samples S=s1,…,sk from a probability distribution that has:
- Mass ≥ q on samples that pass HOG_{b,k}(C)
- Mass ≥ p on samples with probability ≥ 2^{-H}
And suppose b(p+q-1) >> p. Then there’s an AM protocol for LLQSV with a quantum verifier running in ~2^{H/2} time (and a small advice string).
Intuition: Merlin can point Arthur to Ci’s for which M outputs si with probability ≥ 2^{-H}. Then Arthur can use Grover search, taking ~2^{H/2} time, to check Merlin’s claims.
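Where the 2^{H/2} comes from, spelled out (my gloss of the Grover step the slide mentions):

```latex
% Why ~2^{H/2}: amplitude amplification needs O(1/\sqrt{p}) iterations to detect
% an event of probability p.  If M outputs s_i with probability at least 2^{-H},
% Arthur can verify this claim with
\[
  O\!\left( 1 / \sqrt{2^{-H}} \right) \;=\; O\!\left( 2^{H/2} \right)
\]
% coherent runs of M, which is where the quantum verifier's ~2^{H/2} running
% time comes from.
```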

  16. Parameterology with Noisy QCs
Guaranteed min-entropy per round, in terms of:
b = HOG threshold parameter
q = QC’s success probability on HOG_{b,k}
L = log(assumed LLQSV hardness) ≈ n/2
Only positive when bq > 1.
Danger: Noisier hardware → lower q → need larger b to get bq > 1 → q decreases even more, etc.
Escape route: Set b ≈ 1+ε where ε is the QC’s fidelity, and k ≈ 1/ε² where k is the number of samples per round. Then q ≈ 1 and bq > 1 by the Law of Large Numbers.
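A back-of-the-envelope version of the escape route, under a simple noise model that is my own assumption (with probability ε the device samples ideally, otherwise roughly uniformly):

```latex
% Back-of-the-envelope version of the escape route, under an assumed noise model:
% with probability \epsilon the device samples from the ideal output distribution,
% and otherwise from (roughly) the uniform distribution.  Writing
% x_i = 2^n p_C(s_i) for the i-th sample's scaled probability:
\[
  \mathbb{E}[x_i] \;\approx\; \epsilon \cdot 2 + (1-\epsilon) \cdot 1 \;=\; 1 + \epsilon,
  \qquad \mathrm{Var}[x_i] = O(1),
\]
% so the empirical mean over k samples fluctuates by about \sqrt{\mathrm{Var}/k}.
% Taking k \gg 1/\epsilon^2 makes this fluctuation much smaller than \epsilon,
% hence a threshold b just below 1+\epsilon is passed with probability q \approx 1.
```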

  17. Fernando Brandão’s calculation
Assuming the best current simulation algorithms, Brandão calculated the guaranteed min-entropy per round. Plugging in a target Sycamore fidelity of 5×10⁻³, we could get ≈10 certified random bits from a round with k ≈ 25 million samples. This would take a few seconds. The bit rate improves enormously with better fidelity…

  18. Generating the challenge circuits
Suppose the client generates the challenges C1,C2,… using a pseudorandom function (PRF). Then obviously, we’ll want a PRF secure against quantum attack. But more than that: we need to show that if our protocol fails (and if LLQSV is hard), then the PRF can be broken.
Problem: Our protocol “failing” could be extremely hard to notice: it just looks like the distribution over output bits being far from uniform.
Solution: Assume a PRF secure against the class QSZK (Quantum Statistical Zero Knowledge). There are many plausible candidates!
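A minimal sketch of PRF-based challenge derivation, using HMAC-SHA256 as a stand-in PRF and a hypothetical circuit_from_bits routine (both choices are illustrative, not from the talk):

```python
# Minimal sketch of PRF-based challenge generation.  HMAC-SHA256 is used here as
# a stand-in PRF, and circuit_from_bits is a hypothetical routine that maps a
# pseudorandom byte string to an n-qubit circuit; neither choice is from the talk.
import hmac
import hashlib

def challenge_circuit(prf_key: bytes, round_index: int, n: int = 53):
    """Derive the round's pseudorandom n-qubit circuit C_t from a short secret key."""
    # PRF evaluation: pseudorandom bytes that determine the circuit's gates
    tag = hmac.new(prf_key, round_index.to_bytes(8, "big"), hashlib.sha256).digest()
    # Expand to as many pseudorandom bytes as the circuit description needs
    # (2 KB here, purely illustrative)
    stream = b"".join(
        hmac.new(prf_key, tag + i.to_bytes(4, "big"), hashlib.sha256).digest()
        for i in range(64)
    )
    return circuit_from_bits(stream, n)   # hypothetical: bytes -> random circuit

# The client keeps prf_key secret; anyone who later learns the key can recompute
# every challenge C_1, C_2, ... (and, with ~2^n work per spot-check, re-verify
# the transcript).
```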

  19. Randomness Accumulation
Most involved part of our analysis: show that, if the protocol continues for T rounds, then the client gets Ω(Tn) total bits of min-entropy.
What if each round produced Ω(n) random bits individually, but they were maliciously correlated, reusing the same entropy?
Challenge: Show that, if there are many rounds where little randomness gets produced, conditioned on the outcomes of the previous rounds, then we can exploit that to get an AM protocol for LLQSV.
Solution idea: Simulate the protocol up to a random round. Then plug the state of the QC at that round (described using quantum advice) into the LLQSV AM protocol.

  20. The Bushy Path Lemma (with thanks to Ron Peled)
Consider a tree where each edge is either red or blue. Suppose that on almost every path from the root to a leaf, at least 90% of the edges are red.
Let a “bushy path” be a path together with all edges incident to it. Then on almost every bushy path, at least 89% of the edges are red.

  21. Randomness with a random oracle
In the random oracle model, our randomness protocol can be made provably, unconditionally secure.
Admittedly, it doesn’t sound impressive to produce random bits in a world with a random oracle! But remember that these have to be fresh random bits, unpredictable even given the random oracle!
In the random oracle model, both of our assumptions (the one about LLQSV and the one about the PRF) become theorems, which follow from the BBBV Theorem (the optimality of Grover’s algorithm).

  22. Open Problems
Can we get polynomial-time classical verification and NISQ implementability at the same time?
Can we get more and more certified randomness by sampling with the same circuit C over and over? This would make the protocol much more efficient and practical.
Can we prove our scheme sound under less boutique complexity assumptions?
Can we prove our scheme sound even against adversaries that are entangled with the QC?
Are there better cryptographic solutions to the problem of “who trusts the challenger”?
