
QUANTUM COMPUTATION: THE TOPOLOGICAL APPROACH


Presentation Transcript


  1. QUANTUM COMPUTATION: THE TOPOLOGICAL APPROACH Michael H. Freedman Theory Group MSR

  2. Classical computation is based on the idea of being able to write, erase, and read symbols. You also need to have a few internal states, or “moods”, which dictate how you’ll react to what you just read. A Turing machine formalizes this concept. According to the Church-Turing thesis, all REALISTIC computer architectures should be about as efficient as each other (POLYNOMIALLY EQUIVALENT).

  3. The Turing machine lives at the heart of logic and philosophy (the undecidability of the Halting Problem). But, if you think about it, even a “UNIVERSAL” Turing machine - one capable of simulating any other - is rather a paltry thing. It is a victory for the plodders: as long as it can read, write, and not misplace its records, it can gradually accomplish almost anything! But perhaps very SLOWLY.

  4. But today the Church-Turing thesis is in doubt. If QUANTUM MECHANICS is correct, then the Church-Turing thesis is almost certainly wrong! (Another possibility is that certain problems like FACTORING which look hard on an ordinary computer are actually easy.)

  5. There is a new computational model “Quantum Computation”, which is based on the ability to “write”, “rotate”, and “read” quantum states. “The three R’s of the 21st century.” • In the quantum model you “write” states in a vector space; • Operate on them with “rotations” which are the analogs of classical “gates”; and • “Read” the rotated state by making an observation. What is actually observed is a frequency, say a flash of light, corresponding to an eigenvalue of the observable. Which eigenvalue is observed depends probabilistically on the rotated state vector.
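As a concrete, minimal illustration of the three R’s (my own sketch, not from the talk), here is a single qubit written as a vector, rotated by a unitary, and read by sampling an eigenvalue with the Born-rule probabilities:

```python
# Minimal "write / rotate / read" sketch for a single qubit (illustrative only).
import numpy as np

# Write: prepare the basis state |0>.
state = np.array([1.0, 0.0], dtype=complex)

# Rotate: apply a unitary "gate", here a rotation by angle theta about the y-axis.
theta = np.pi / 3
rotation = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)
state = rotation @ state

# Read: the observation returns one eigenvalue; which one is probabilistic,
# with probabilities given by the squared amplitudes of the rotated state.
probabilities = np.abs(state) ** 2
outcome = np.random.choice([0, 1], p=probabilities)
print(probabilities, outcome)
```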

  6. What all popular publications call “weird” is that the state vector, living in the physical Hilbert space, does not have to be part of some fixed standard basis (which may have classical meaning) but rather can be any linear combination or “superposition” of these basis vectors. This is only “weird” because we are big, clumsy things and cannot usually probe, with our unaided senses, the small scales at which superpositions are manifest. But the mathematics is incredibly simple and not in the least “weird”: it is just linear algebra. (Mathematically, quantum mechanics is perfect and simple, and classical mechanics is equally perfect and simple. What is still difficult is to describe exactly how the two are related!)

  7. It is not completely obvious that greater computational power should reside in the quantum world. On the positive side, you can prepare enormous superpositions of classical states and try to use them to do (exponentially) many things at once. But on the negative side, when you go to make your observation, you have to be prepared to listen to a cacophony of replies all chiming in at once. There is by now an art of harnessing the interference effects known from wave mechanics to cause the interesting bits of the “answer” to reinforce and the annoying and useless bits to cancel. (One should think back to college days and remember the double slit experiment.)

  8. But this is so far pretty vague. The simplest EXAMPLE I know of that suggests extraordinary computational power lurking in the quantum world is a simple protocol for querying a quantum black box with a single yes/no question and being able to learn which object among 4 (not 2 !!!) it is “thinking of.” While we ask only one question, it is not “Is the object A?”, but rather a uniform superposition of four questions: “Is the object A, B, C, or D?” all asked at once.

  9. , , , , , , , , , , , , , , , It is easier in symbols: according to the answer being A,B,C, or D.

  10. The Black Box has communicated the hidden object: A, B, C, or D by flipping the phase in the 1st, 2nd, 3rd, or 4th coordinate. The punch line is that the 4x4 matrix above is ORTHOGONAL, so we can find an observable which – with certainty – distinguishes the four possibilities with one observation. This is the smallest special case of the “Grover search algorithm.”
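A quick numerical check of this punch line (mine, not part of the talk): the four phase-flipped replies are mutually orthogonal unit vectors, so one measurement in that basis identifies the hidden object with certainty.

```python
# The four possible replies of the black box: the uniform superposition with the
# phase flipped in the 1st, 2nd, 3rd, or 4th coordinate.
import numpy as np

replies = []
for hidden in range(4):
    v = np.ones(4) / 2.0   # uniform superposition (1/2)(1, 1, 1, 1)
    v[hidden] *= -1        # the black box flips the phase of the hidden coordinate
    replies.append(v)

M = np.array(replies)
print(M @ M.T)             # the 4x4 identity: the replies are orthonormal, so a single
                           # observation distinguishes the four possibilities
```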

  11. The most famous CS problem which a quantum computer - if built - can quickly solve is factoring the number n (Shor). This problem seems to scale sub-exponentially, but still super-polynomially, in the number of digits classically, and only polynomially in the quantum world. So it looks like this new computational model has considerably more power than the old.
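For reference, the standard complexity estimates for factoring an n-bit number (textbook figures, not reproduced from the slide):

```latex
% Classical: general number field sieve (heuristic running time for an n-bit number)
\[
  T_{\mathrm{classical}}(n) \approx \exp\!\big(c\, n^{1/3} (\log n)^{2/3}\big),
  \qquad c = \big(\tfrac{64}{9}\big)^{1/3} + o(1).
\]
% Quantum: Shor's algorithm (gate count, depending on the multiplication routine used)
\[
  T_{\mathrm{quantum}}(n) = O\!\big(n^{2} \log n \, \log\log n\big) \subseteq O(n^{3}).
\]
```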

  12. Can we put this in perspective? • Should we expect new computational models to emerge every few decades? I think not: Physics, like an onion, has its layers: the classical, the quantum mechanical, and then very far down perhaps a stringy quantum gravity…. No one, I think, will ever make a computer out of strings or black holes so the quantum computer may be the final information processing technology even if our race survives 10,000 years. It will define the boundary between what is knowable and what is not.

  13. What exactly can we expect as an immediate consequence of quantum computation if the technology can be created? With the fall of RSA and most/all other classical encryption schemes, there would be havoc, exciting havoc, in cryptography – probably a net minus. Just the prospect of a future quantum computer has already spawned a field of “quantum security protocols” whose future development looks secure.

  14. The Grover algorithm, which we have already met, speeds up mindless search (say on NP-complete problems) by a square root. But I would not count on this for much: the quantum computer’s overhead could easily eat up this advantage (depending on details of implementation which we cannot yet know). So what is left?
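Backing up for a moment, the square-root speed-up in symbols (standard query counts, not from the slide): unstructured search over N candidates takes about N classical queries but only about √N quantum queries, which for an NP-complete search space of size 2^n is still exponential.

```latex
% Unstructured search over N candidates: classical vs. Grover query counts.
\[
  N \;\longrightarrow\; O\!\big(\sqrt{N}\big),
  \qquad
  2^{\,n} \;\longrightarrow\; O\!\big(2^{\,n/2}\big).
\]
```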

  15. There is progress in quantum CS (the graph isomorphism problem seems to hang by a thread), but even if there were NO CS problems solved by quantum computation we would have a feast before us. Feynman’s original motivation for proposing quantum computation was the inability of classical computers to carry out realistic simulations of the simplest quantum mechanical systems. Chemistry and materials science (but I think not medicine) would be revolutionized. With a quantum computer we would soon find out how the cuprate high-temperature superconductors work, and whether the family contains the Holy Grail: a room temperature superconductor.

  16. While general NP problems would not be solved, there is an interesting heuristic suggestion from an MIT group (Farhi, Goldstone, et al.): Create a correspondence between solutions of a problem and the ground state of a Hamiltonian; now start with an easy problem for which the Hamiltonian readily attains its ground state, then adiabatically deform to the hard problem, hoping the system remains in its ground state. This and many other ideas could prove quite powerful, but we will not know until we build a quantum computer – simulating the approach on a classical computer is exponentially inefficient.
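A minimal numerical sketch of the adiabatic idea (a toy two-qubit example of my own, not the MIT group’s construction): interpolate H(s) = (1 − s)·H_easy + s·H_hard and track the spectral gap, which governs how slowly the deformation must be.

```python
# Toy sketch of the adiabatic idea: interpolate H(s) = (1 - s) * H_easy + s * H_hard
# and watch the spectral gap.  H_easy and H_hard are illustrative 2-qubit choices.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
I = np.eye(2)

# "Easy" driver Hamiltonian: transverse field; its ground state is the uniform superposition.
H_easy = -(np.kron(X, I) + np.kron(I, X))
# "Hard" problem Hamiltonian: a toy Ising cost function whose ground state encodes the answer.
H_hard = -np.kron(Z, Z) + 0.5 * np.kron(Z, I)

def spectral_gap(s):
    H = (1 - s) * H_easy + s * H_hard
    energies = np.linalg.eigvalsh(H)
    return energies[1] - energies[0]

gaps = [spectral_gap(s) for s in np.linspace(0.0, 1.0, 101)]
print("minimum gap along the path:", min(gaps))
# The adiabatic theorem says the required run time grows roughly like 1 / (minimum gap)^2.
```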

  17. Farther down the road, it is difficult to restrain one’s imagination. A tiny flake of material one millimeter on a side and only a few angstroms thick could serve as the guts of a powerful quantum computer. Will we finally put ourselves “out of business” by making a much more capable entity?

  18. Why has AI not succeeded (an interesting question to which I have no answer)? Without suggesting (à la Penrose) that there is anything quantum mechanical about human intelligence, it seems quite possible that quantum computers will be programmed to appear/be intelligent. Our ability to hold a lot in mind as “background” to decision making might be imitated through the use of superposition. (How this works in our - presumably classical - brains is something I would love to know!)

  19. Before we get too excited about • what the NEW WORLD will look like, • whether it contains interesting business opportunities, or • even a place for humans at all, there is a major problem to be addressed: DECOHERENCE and the accumulation of ERRORS.

  20. Decoherence is why we do not observe cats half dead and half alive. It is the tendency of quantum systems to become classical. The ENVIRONMENT tends to reach into any quantum mechanical system and MEASURE it and reduce it to classical PROBABILISTIC combinations - rather than the more powerful quantum mechanical SUPERPOSITIONS. Decoherence will always be an issue, but its significance will depend sensitively on the proposed architecture for the quantum computer. In particular it will depend on what “degrees of freedom” we compute with.

  21. A broad definition of quantum computation is any result you can eke out of an experiment on a quantum mechanical system. But the usual working definition is the “qubit” or “quantum circuit” model. A qubit is a two dimensional vector space spanned by “up” and “down”; a linearized bit, if you like: instead of a state being entirely up or entirely down, it is possible that it is in a “superposition” α·up + β·down, where α and β are complex numbers (with |α|² + |β|² = 1). (“Two” is not actually important to the story – any finite number of states gives an equivalent theory.)

  22. After defining qubits, the world at large takes a bit of a wrong turn by taking them too seriously and imagining that they, directly, should be the guardians of quantum information. This is too naïve. It is imagined that if you need 10,000 qubits, well, then you should dope a silicon wafer with 10,000 phosphorus atoms and let their individual nuclear spins label the qubits. (Or 10,000 photon polarizations, or electron spins… etc.) In the conventional approach, the qubit of information is a local (in space or in momentum space) quantum number. The operation of the quantum computer is then imagined to be a series of “gates” applied to the spins either individually or in pairs. Local degrees of freedom are fatally vulnerable to DECOHERENCE. The same design that made it easy for the programmer to reach into the system and gate it makes it easy for the environment to reach in and touch those same local degrees of freedom.
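For concreteness (my own illustration, not part of the talk): in this circuit picture a “gate” is simply a small unitary matrix applied to one spin or to a pair of spins, for instance a Hadamard on one qubit followed by a controlled-NOT on the pair.

```python
# Circuit-model gating: single- and two-qubit gates acting on local spins.
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # single-qubit Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)                 # two-qubit controlled-NOT gate
I2 = np.eye(2, dtype=complex)

state = np.zeros(4, dtype=complex)
state[0] = 1.0                                                 # start in |00>
state = np.kron(H, I2) @ state                                 # gate the first spin alone
state = CNOT @ state                                           # gate the pair together
print(state)  # (1/sqrt(2)) (|00> + |11>): an entangled state of the two local spins
```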

  23. This conundrum has been “mathematically” vanquished by the “fault tolerance” theorem, which states that once a sufficient standard of accuracy and isolation is attained, there is a recursive strategy for correcting errors so that one can carry out indefinitely long computations. As a mathematician, I admire the theorem, but as a scientist, I regard it as nearly irrelevant since the required initial accuracy (about five decimal places) is unrealistic. This is why NMR can factor 15 (with high probability) but will never threaten RSA. It will never sustain a long calculation.

  24. The Topological Model The topological approach amounts to physical rather than “software” error correction. Topology is the study of properties that are retained under deformation. A physical system is said to be in a “topological phase” if only a change of topology can evolve the system. Observables depend only coarsely on the trajectories of particles; they depend on winding numbers and their generalizations, but not on local detail.

  25. Some topology in physics is very familiar: if two identical fermions are exchanged, the state vector is multiplied by -1. The details of the exchange trajectory are irrelevant. In our world, with three spatial dimensions (Do not let the string theorists unsettle you about this point!), there is really only a single type of exchange – up to deformation. The two dimensional world is richer here: we have a clockwise and a counter-clockwise exchange, each quite different from the other and both of infinite order in space-time (2+1 dimensions). If many identical particles are exchanged repeatedly, the most general braid can be produced.
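A tiny numeric contrast (my sketch, using the standard abelian-anyon picture): in three dimensions an exchange can only multiply the state by ±1, so exchanging twice is the identity, while in two dimensions a counter-clockwise exchange may multiply by a phase e^{iθ}, so repeated exchanges keep winding and only the opposite (clockwise) exchange undoes them.

```python
# Exchange statistics as phases: fermions (three dimensions) vs abelian anyons (two dimensions).
import numpy as np

fermion_exchange = -1.0                          # the only non-trivial possibility in 3D
print(fermion_exchange ** 2)                     # 1.0: exchanging twice is the identity

theta = 2 * np.pi * np.sqrt(2) / 7               # a generic exchange angle for a 2D anyon
counter_clockwise = np.exp(1j * theta)           # counter-clockwise exchange
clockwise = np.exp(-1j * theta)                  # clockwise exchange is genuinely different
print(np.isclose(counter_clockwise ** 2, 1.0))   # False: the exchange has infinite order
print(np.isclose(counter_clockwise * clockwise, 1.0))  # True: only unwinding undoes it
```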

  26. Particle-antiparticle pairs are created out of the vacuum. [Spacetime diagram, labeled along the time axis: birth, braiding, death, afterlife?]

  27. Around 2001, my collaborators and I* created such a two dimensional system (mathematically) and proved that it was a universal quantum computer. In the last three years we have moved these mathematical constructs to the brink of honest physical descriptions**: electrons feeling chemical and Coulomb potentials and tunneling around in a two dimensional lattice. Several groups of chemists and physicists are responding to these models. (*F., M. Larsen, Z. Wang; **F., C. Nayak, K. Shtengel)

  28. Topological phases are not merely mathematical constructs. Laughlin won the 1998 Nobel Prize in physics for his (topological) description of Fractional Quantum Hall (FQH) fluids. These are correlated systems of electrons trapped in a two dimensional crystal interface within a semiconductor and subjected to a strong transverse magnetic field. In fact, many theorists believe that certain of the finer FQH plateaus are in states that we now know to be “universal quantum computers.” Unfortunately they are far too delicate (mK spectral gap) to be harnessed. One of the first applications of the FQH effect was a measurement of the fine structure constant to 9 decimal places: topological phases, once created, appear to be exact (corresponding to the mathematical fact that unitary representations of the braid group lie only in discrete sequences).

  29. In broad terms, the topological states of matter we intend to make are mathematically isomorphic to the operator algebras of these FQH systems. We have reasons to believe these ALGEBRAIC structures will be more stable. Before we would attempt to build a quantum computer we would manufacture, first mathematically and then materially, a little two-dimensional universe unto itself. It is impossible to overstate how astonishingly different the physical properties of this little world will be from any known matter. The scientific and technological possibilities are immense and I have no idea what most of them are. We know certain properties of these materials in complete detail (for mathematical reasons). These include the braiding and fusion algebra. There are theoretical reasons to believe that such systems could actually provide a route to high-temperature superconductivity. The complicated braiding properties of the quasiparticle excitations of these systems do not allow them to propagate easily. However, pairs or other aggregates of quasiparticles might be able to propagate more easily, and the formation of pairs contains the germ of superconductivity.

  30. If I were allowed a metaphor here: metals with their half-filled electronic bands are born conductors (they sit around waiting for a potential to be applied so that currents can flow). Semiconductors are the perfect thing for gating currents (their conductance depends very strongly and non-linearly on the applied gate voltage, which acts as a switch). Our new material, Q, will then be the natural home for the processing of quantum information.

  31. In fact, above some critical temperature quasi-particle pairs will spontaneously arise from the ground state (“vacuum”) and the little creature will be doing its own unstructured quantum computation. Perhaps like a small child idly watching a stream, its “thoughts” will randomly be drawn this way and that: “thinking” about nothing really, but “thinking” more deeply than we poor classical beings could ever hope to. We will take this dreamy, brilliant child and freeze her to a temperature which halts these natural “thoughts” and then (with an STM tip, pull one charged quasi-particle around a multitude of pinned quasi-particles and so) impose our own program on her “mind”. This will be the quantum computer.

  32. I’d like to close with a few screens showing our candidate architecture for the quantum computing material. Also, I have a demo (written by Dimitar Jetchev – Harvard undergraduate) which allows the user to classically explore the electron fluctuations mandated by the Hamiltonian operator which governs the 2-dimensional model. The Hamiltonian describes on-site and Coulomb potentials together with tunneling amplitudes for a population of electrons filling a 2-dimensional KAGOME crystal.

  33. Locating Topological Phases Inside Hubbard Type Models. Kirill Shtengel, Chetan Nayak, Michael Freedman

  34. Hubbard Model. In our model the sites (atoms) are arrayed on the Kagome lattice. The colors encode differing chemical potentials μ_a, μ_b, μ_c. Tunneling amplitudes t_ab also vary with the colors.

  35. We work with an equivalent triangular representation. • In this representation particles (e.g. electrons) live on edges.
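As a toy illustration only (a made-up three-site cluster with invented numbers, not the actual Hamiltonian of this model): a single-particle Hubbard-type matrix in which each site carries a color-dependent chemical potential and each bond a color-pair-dependent tunneling amplitude.

```python
# Toy single-particle Hamiltonian with color-dependent on-site potentials and hoppings.
# The cluster, the colors, and the numbers are illustrative choices, not the model's values.
import numpy as np

colors = ['red', 'green', 'blue']                         # one site of each color
mu = {'red': 0.0, 'green': -0.3, 'blue': 0.5}             # on-site chemical potentials
t = {frozenset({'red', 'green'}): 1.0,                    # tunneling amplitudes by color pair
     frozenset({'green', 'blue'}): 0.8,
     frozenset({'red', 'blue'}): 0.6}
bonds = [(0, 1), (1, 2), (0, 2)]                          # a single triangle of sites

H = np.zeros((3, 3))
for i, color in enumerate(colors):
    H[i, i] = mu[color]                                   # diagonal: chemical potentials
for i, j in bonds:
    H[i, j] = H[j, i] = -t[frozenset({colors[i], colors[j]})]   # off-diagonal: hopping

print(np.linalg.eigvalsh(H))                              # single-particle spectrum of the toy cluster
```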

  36. Hamiltonian and Ground State Manifold. The Hilbert space is H = H_{1/6} = span{all particle positions at 1/6 filling}. With the on-site repulsion U_0 large, the low-energy configurations are those with one particle per bond, so the ground state manifold is D = {dimer covers of the triangular lattice T}. Now the small terms (the tunneling amplitudes) are brought in.
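To make the dimer-cover correspondence concrete (a toy patch of two glued triangles, chosen by me, not the lattice actually used): brute-force enumeration of the dimer covers that span a ground state manifold D.

```python
# Enumerate dimer covers (perfect matchings) of a small triangular-lattice-like patch.
# Each cover labels one low-energy state in the ground state manifold D.
from itertools import combinations

vertices = [0, 1, 2, 3]                                   # two triangles glued along the edge (1, 2)
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

def is_dimer_cover(subset):
    """A dimer cover touches every vertex exactly once."""
    touched = sorted(v for edge in subset for v in edge)
    return touched == vertices

covers = [c for c in combinations(edges, len(vertices) // 2) if is_dimer_cover(c)]
print(covers)  # [((0, 1), (2, 3)), ((0, 2), (1, 3))]: a two-dimensional manifold D
```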

  37. Review - Perturbation Theory. [Slide of equations: the effective Hamiltonian as a function of λ; the unwanted perturbed terms can be handled by recursion; the off-diagonal terms of the projectors supply the dynamics, while the diagonal terms of the projectors are balanced to keep the ground state manifold intact.]

  38. We have an “occupation model” at 1/6 fill. For example, imagine that each green atom has donated one electron which is now free to localize near any atom (= site of the Kagome lattice K). Let’s look at a “game”.
