
Quantum Boltzmann Machine


Presentation Transcript


  1. Quantum Boltzmann Machine Mohammad Amin D-Wave Systems Inc.

  2. Not the only use of quantum annealing (QA). Maybe not the best use of QA.

  3. Adiabatic Quantum Computation H(t) = (1-s) H_D + s H_P, s = t/t_f. [Figure: energy levels vs. s; the minimum gap g_min separates the instantaneous ground state (initial state at s = 0, solution at s = 1) from the first excited state.] The required annealing time scales as t_f ~ (1/g_min)².
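
To make the gap picture concrete, here is a minimal exact-diagonalization sketch (not part of the original deck; the 4-qubit random Ising instance and all parameters are illustrative assumptions) that scans H(s) = (1-s) H_D + s H_P and locates g_min:

```python
# Minimal sketch: scan H(s) = (1-s) H_D + s H_P for a small random Ising
# problem and locate the minimum gap g_min, which sets t_f ~ (1/g_min)^2.
import numpy as np

n = 4                                    # qubits (small enough for dense ED)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def op(single, site):
    """Embed a single-qubit operator at position `site` of an n-qubit register."""
    out = np.array([[1.]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

H_D = -sum(op(sx, a) for a in range(n))          # transverse-field driver
rng = np.random.default_rng(0)
H_P = -sum(rng.normal() * op(sz, a) for a in range(n)) \
      - sum(rng.choice([-1., 1.]) * op(sz, a) @ op(sz, c)
            for a in range(n) for c in range(a + 1, n))

gaps = [np.diff(np.linalg.eigvalsh((1 - s) * H_D + s * H_P)[:2])[0]
        for s in np.linspace(0, 1, 201)]
g_min = min(gaps)
print(f"g_min = {g_min:.4f}  ->  t_f ~ (1/g_min)^2 = {1 / g_min**2:.1f}")
```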

  4. Thermal Noise [Figure: system coupled to a bath; energy levels vs. s with thermal energy k_B T; ground-state probability P_0 vs. s; dynamical freeze-out.]

  5. Open quantum calculations of a 16-qubit random problem [Figure: classical energy levels.]

  6. Equilibration Can Cause Correlation Correlation with simulated annealing Hen et al., PRA 92, 042325 (2015)

  7. Equilibration Can Cause Correlation Correlation with Quantum Monte Carlo Boixo et al., Nature Phys. 10, 218 (2014)

  8. Equilibration Can Cause Correlation Correlation with spin vector Monte Carlo (SVMC) Shin et al., arXiv:1401.7087

  9. Equilibration Can Mask Quantum Speedup Brooke et al., Science 284, 779 (1999) Quantum advantage is expected to be dynamical

  10. Equilibration Can Mask Quantum Speedup Ronnow et al., Science 345, 420 (2014) Hen et al., arXiv:1502.01663 King et al., arXiv:1502.02098 Equilibrated probability!!! Computation time is independent of dynamics!

  11. Residual Energy vs Annealing Time 50 random problems, 100 samples per problem per annealing time, bimodal (J = ±1, h = 0) [Figure: mean residual energy and lowest residual energy vs. annealing time (ms).]

  12. Residual Energy vs Annealing Time 50 random problems, 100 samples per problem per annealing time [Figure: two panels, residual energy vs. annealing time (ms), for frustrated loops (α = 0.25) and bimodal (J = ±1, h = 0) problems.]

  13. Boltzmann sampling is #P-hard, which is harder than NP. What can we do with a quantum Boltzmann distribution?

  14. arXiv:1601.02036, with Bohdan Kulchytskyy, Roger Melko, Jason Rolfe, and Evgeny Andriyash

  15. Machine Learning in our Daily Life

  16. Introduction to Machine Learning [Figure: data are used to train a model; the trained model is then applied to unseen data.]

  17. Probabilistic Models The model is a probability distribution P_θ(v) over the variables v, with tunable parameters θ. Training: tune θ such that P_θ approximates the data distribution P_data.

  18. Boltzmann Machine The model is the Boltzmann distribution (β = 1): P_θ(z) = e^{-E_θ(z)} / Z.

  19. Boltzmann Machine Ising model: spins z_a = ±1, energy E(z) = -Σ_a b_a z_a - Σ_{a,c} w_ac z_a z_c, parameters θ = (b_a, w_ac).
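
As a concrete illustration (not from the deck; the size and random parameters are assumptions), this brute-force sketch builds the Ising energy and its Boltzmann distribution for a handful of spins:

```python
# Brute-force sketch of the classical BM above: spins z_a = ±1 with
# E(z) = -Σ_a b_a z_a - Σ_{a<c} w_ac z_a z_c and P(z) = e^{-E(z)} / Z (β = 1).
import itertools
import numpy as np

n = 5
rng = np.random.default_rng(1)
b = rng.normal(scale=0.5, size=n)                     # biases b_a
w = np.triu(rng.normal(scale=0.5, size=(n, n)), 1)    # couplings w_ac (a < c)

def energy(z):
    z = np.asarray(z, dtype=float)
    return -b @ z - z @ w @ z          # upper triangle counts each pair a < c once

states = [np.array(s) for s in itertools.product([-1, 1], repeat=n)]
weights = np.array([np.exp(-energy(z)) for z in states])
Z = weights.sum()
P = weights / Z                        # exact Boltzmann distribution
print("Z =", round(Z, 3), "| most probable state:", states[P.argmax()])
```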

  20. Adding Hidden Variables Split the spins into visible and hidden units: z_a = (z_ν, z_i), where ν indexes visible and i indexes hidden variables.

  21. Training a BM Maximize the log-likelihood L = Σ_v P_data(v) log P_v, or equivalently minimize -L, using a gradient descent technique: δθ = η ∂_θ L, with training rate η. We need an efficient way to calculate ∂_θ L.

  22. Calculating the Gradient ∂_θ log P_v = ⟨∂_θ E⟩ − ⟨∂_θ E⟩_v: the difference between an unclamped average and an average with clamped visibles (visible variables fixed to the data vector v).

  23. Training Ising Hamiltonian Parameters δb_a ∝ ⟨z_a⟩_v − ⟨z_a⟩ and δw_ac ∝ ⟨z_a z_c⟩_v − ⟨z_a z_c⟩, each a clamped average (averaged over the data) minus an unclamped average. Gradients can be estimated using sampling, as sketched below!
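
Continuing the previous sketch, both gradient terms are sample averages: the clamped one over the training data, the unclamped one over model samples. A minimal one-step sketch (the `n`, `states`, `P`, `b`, `w` variables come from the block above; the data here are a random placeholder standing in for a real training set):

```python
# One gradient step for the fully visible BM, estimated by sampling.
import numpy as np

def moments(samples):
    """Sample estimates of <z_a> and <z_a z_c> over a batch of ±1 vectors."""
    s = np.asarray(samples, dtype=float)
    return s.mean(axis=0), (s[:, :, None] * s[:, None, :]).mean(axis=0)

rng = np.random.default_rng(2)
data = rng.choice([-1, 1], size=(200, n))              # placeholder training data
model = np.array([states[i] for i in rng.choice(len(states), size=2000, p=P)])

m1_c, m2_c = moments(data)                             # clamped (data) averages
m1_u, m2_u = moments(model)                            # unclamped (model) averages

eta = 0.1                                              # training rate
b += eta * (m1_c - m1_u)                               # δb_a ∝ <z_a>_v − <z_a>
w += eta * np.triu(m2_c - m2_u, 1)                     # δw_ac ∝ <z_a z_c>_v − <z_a z_c>
```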

  24. Question: Is it possible to train a quantum Boltzmann machine? Replace the Ising Hamiltonian with a transverse Ising Hamiltonian.

  25. Transverse Ising Hamiltonian H = -Σ_a Γ_a σ^x_a - Σ_a b_a σ^z_a - Σ_{a,c} w_ac σ^z_a σ^z_c

  26. Quantum Boltzmann Distribution Density matrix: ρ = e^{-H} / Z, with Z = Tr[e^{-H}]. Boltzmann probability of a visible state v: P_v = Tr[Λ_v ρ], where the projection operator Λ_v = |v⟩⟨v| ⊗ 1 fixes the visible qubits to v and acts as the identity matrix on the hidden qubits.
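
A small numerical sketch of these definitions (qubit counts and random parameters are illustrative assumptions): build the transverse Ising Hamiltonian of slide 25, form ρ = e^{-H}/Z with `scipy.linalg.expm`, and evaluate P_v = Tr[Λ_v ρ]:

```python
# Sketch of the quantum Boltzmann distribution for 2 visible + 1 hidden qubits:
# ρ = e^{-H}/Z and P_v = Tr[Λ_v ρ].
import numpy as np
from scipy.linalg import expm

n_vis, n_hid = 2, 1
n = n_vis + n_hid                      # visible qubits occupy sites 0..n_vis-1
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def op(single, site):
    """Embed a single-qubit operator at position `site` of an n-qubit register."""
    out = np.array([[1.]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

rng = np.random.default_rng(3)
Gamma = 0.5 * np.ones(n)               # transverse fields Γ_a
b = rng.normal(scale=0.3, size=n)
w = np.triu(rng.normal(scale=0.3, size=(n, n)), 1)

H = -sum(Gamma[a] * op(sx, a) for a in range(n)) \
    - sum(b[a] * op(sz, a) for a in range(n)) \
    - sum(w[a, c] * op(sz, a) @ op(sz, c) for a in range(n) for c in range(a + 1, n))

rho = expm(-H)
rho /= np.trace(rho)                   # density matrix e^{-H} / Tr e^{-H}

def projector(v):
    """Λ_v = |v><v| on the visible qubits, identity on the hidden ones."""
    ket = np.array([1.])
    for spin in v:                     # map spin +1 -> |0>, -1 -> |1>
        ket = np.kron(ket, np.array([1., 0.]) if spin == 1 else np.array([0., 1.]))
    return np.kron(np.outer(ket, ket), np.eye(2 ** n_hid))

print("P_v for v = (+1, -1):", np.trace(projector([1, -1]) @ rho).real)
```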

  27. Gradient Descent ∂_θ log P_v = Tr[Λ_v ∂_θ e^{-H}] / Tr[Λ_v e^{-H}] − Tr[∂_θ e^{-H}] / Tr[e^{-H}]. Classically, the two terms reduce to a clamped average and an unclamped average.

  28. Calculating the Gradient Quantum mechanically, because H and ∂_θ H do not commute, Tr[Λ_v ∂_θ e^{-H}] ≠ −Tr[Λ_v (∂_θ H) e^{-H}]: the two terms no longer reduce to a clamped average and an unclamped average, so the gradient cannot be estimated using sampling!

  29. Two Useful Properties of Trace Cyclic invariance: Tr[∂_θ e^{-H}] = −Tr[(∂_θ H) e^{-H}]. Golden-Thompson inequality: Tr[e^A e^B] ≥ Tr[e^{A+B}] for Hermitian matrices A and B (checked numerically below).
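
The inequality is easy to check numerically; this short sketch draws random Hermitian matrices and verifies Tr[e^{A+B}] ≤ Tr[e^A e^B]:

```python
# Numerical check of Golden-Thompson: Tr[e^{A+B}] <= Tr[e^A e^B] for random
# Hermitian A, B (equality holds when A and B commute).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
for _ in range(5):
    X = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
    Y = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
    A, B = (X + X.conj().T) / 2, (Y + Y.conj().T) / 2   # Hermitian parts
    lhs = np.trace(expm(A + B)).real
    rhs = np.trace(expm(A) @ expm(B)).real
    print(f"Tr e^(A+B) = {lhs:10.3f}  <=  Tr e^A e^B = {rhs:10.3f}")
    assert lhs <= rhs + 1e-9
```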

  30. Finding lower bounds By the Golden-Thompson inequality, Tr[Λ_v e^{-H}] = Tr[e^{ln Λ_v} e^{-H}] ≥ Tr[e^{-H + ln Λ_v}].

  31. Finding lower bounds Golden-Thompson therefore gives a lower bound for the log-likelihood: log P_v ≥ log Tr[e^{-H_v}] − log Tr[e^{-H}], where H_v = H − ln Λ_v.

  32. Calculating the Gradients Rather than maximizing the log-likelihood itself, minimize the corresponding upper bound of the negative log-likelihood; its gradient involves a clamped thermal average (with respect to H_v) and the usual unclamped average.

  33. Clamped Hamiltonian The clamped Hamiltonian H_v = H − ln Λ_v assigns an infinite energy penalty to states whose visibles differ from v: the visible qubits are clamped to their classical values given by the data.

  34. Estimating the Steps δb_a ∝ ⟨σ^z_a⟩_v − ⟨σ^z_a⟩ and δw_ac ∝ ⟨σ^z_a σ^z_c⟩_v − ⟨σ^z_a σ^z_c⟩: a clamped average (thermal average of H_v, averaged over the data) minus an unclamped average. We can now use sampling to estimate the steps; see the sketch below.
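
A minimal sketch of one bound-gradient step by exact diagonalization, reusing `op`, `projector`, `sz`, `n`, `H`, `b`, `w` from the quantum sketch after slide 26. The infinite penalty of H_v = H − ln Λ_v is approximated by a large finite penalty, and the two toy data vectors are assumptions, not the paper's training set:

```python
# One bound-gradient training step via exact thermal states.
import numpy as np
from scipy.linalg import expm

def thermal_avgs(Hm):
    """<σ^z_a> and <σ^z_a σ^z_c> in the thermal state e^{-Hm}/Tr e^{-Hm}."""
    rho = expm(-Hm)
    rho /= np.trace(rho)
    m1 = np.array([np.trace(op(sz, a) @ rho).real for a in range(n)])
    m2 = np.array([[np.trace(op(sz, a) @ op(sz, c) @ rho).real
                    for c in range(n)] for a in range(n)])
    return m1, m2

data = [np.array([1, -1]), np.array([1, 1])]       # toy visible data vectors
penalty = 50.0                                     # stands in for -ln Λ_v

m1_c = np.zeros(n); m2_c = np.zeros((n, n))
for v in data:                                     # data average of clamped averages
    a1, a2 = thermal_avgs(H + penalty * (np.eye(2 ** n) - projector(v)))
    m1_c += a1 / len(data); m2_c += a2 / len(data)
m1_u, m2_u = thermal_avgs(H)                       # unclamped averages

eta = 0.05
b += eta * (m1_c - m1_u)                           # δb_a ∝ <σ^z_a>_v − <σ^z_a>
w += eta * np.triu(m2_c - m2_u, 1)                 # δw_ac ∝ <σ^z_aσ^z_c>_v − <σ^z_aσ^z_c>
# A full training loop would rebuild H from the updated b, w and iterate.
```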

  35. Training the Transverse Field (Γ_a) Minimizing the upper bound runs into two problems: ⟨σ^x_a⟩ cannot be estimated from measurements in the σ^z basis, and the clamped average ⟨σ^x_ν⟩_v vanishes for all visible qubits, thus Γ_ν cannot be trained using the bound.

  36. Example: 10-Qubit QBM Graph: fully connected (K10), fully visible

  37. Example: 10-Qubit QBM Training set: an M-modal distribution. Each mode is a random spin orientation, and a state's probability falls off with its Hamming distance from each mode (single mode vs. multi-mode). Here M = 8 and p = 0.9.
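
A sketch of this training distribution; the formula on the slide was an image, so the version below assumes the construction from arXiv:1601.02036, P_data(v) = (1/M) Σ_k p^(N−d_k) (1−p)^(d_k), with d_k the Hamming distance of v from mode k:

```python
# Sketch of the M-modal training distribution (formula assumed from
# arXiv:1601.02036): M random modes; probability decays with Hamming distance.
import itertools
import numpy as np

N, M, p = 10, 8, 0.9
rng = np.random.default_rng(5)
modes = rng.choice([-1, 1], size=(M, N))            # M random spin orientations

states = np.array(list(itertools.product([-1, 1], repeat=N)))
d = (states[:, None, :] != modes[None, :, :]).sum(axis=2)   # Hamming distances
P_data = (p ** (N - d) * (1 - p) ** d).mean(axis=1)
print("normalization check:", P_data.sum())         # = 1 by the binomial theorem
```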

  38. Exact Diagonalization Results [Figure: KL-divergence during training for a classical BM, a QBM trained with the bound gradient (fixed transverse field Δ = 2), and a QBM trained with the exact gradient, where Δ is trained to a final value Δ_final = 2.5.]

  39. Sampling from D-Wave Dickson et al., Nat. Commun. 4, 1903 (2013) Probabilities cross at the anticrossing

  40. Conclusions: • A quantum annealer can provide fast samples of a quantum Boltzmann distribution • A QBM can be trained by sampling • A QBM may learn some distributions better than a classical BM • See arXiv:1601.02036
