
SIMULATED ANNEALING


Presentation Transcript


  1. SIMULATED ANNEALING Cole Ott

  2. The next 15 minutes • I’ll tell you what Simulated Annealing is. • We’ll walk through how to apply it to a Markov Chain (via an example). • We’ll play with a simulation.

  3. 1

  4. The Problem • State space S • Scoring function f : S → ℝ • Objective: to find s ∈ S minimizing f(s)

  5. Idea • Construct a Markov Chain on S with a stationary distribution π_T that gives higher probabilities to lower-scoring states. • Run this chain for a while, until it is likely to be in a low-scoring state s • Construct a new chain whose stationary distribution has an even stronger preference for states minimizing f • Run this chain from s • Repeat: construct another chain …

  6. Idea (cont’d) • Basically, we are constructing an inhomogeneous Markov Chain that gradually places more probability on f-minimizing states • We hope that if we do this well, then P(X_n is a global minimum) approaches 1 as n → ∞
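
A minimal sketch of this loop in Python (my illustration, not part of the deck). Here f is the scoring function; propose and schedule are hypothetical stand-ins for a neighbor-generating move and a cooling schedule. Each inner loop is a Metropolis chain whose stationary distribution (assuming propose is symmetric) is the Boltzmann distribution introduced on the next slide.

```python
import math
import random

def anneal(initial_state, f, propose, schedule):
    """Run a Metropolis chain at each temperature in the schedule,
    starting each chain where the previous one stopped."""
    state = initial_state
    for T, steps in schedule:
        for _ in range(steps):
            candidate = propose(state)
            delta = f(candidate) - f(state)
            # Always accept improvements; accept uphill moves with
            # probability e^(-delta/T).
            if delta <= 0 or random.random() < math.exp(-delta / T):
                state = candidate
    return state
```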

  7. Boltzmann Distribution • We will use the Boltzmann Distribution π_T(s) = e^{−f(s)/T} / Z_T for our stationary distributions

  8. Boltzmann Distribution (cont’d) • Z_T = Σ_{s′∈S} e^{−f(s′)/T} is a normalization term • We won’t have to care about it: it cancels in the ratios π_T(s′)/π_T(s) that our chains will use.

  9. Boltzmann Distribution (cont’d) • T is the Temperature • When T is very large, π_T approaches the uniform distribution • When T is very small, π_T concentrates virtually all probability on f-minimizing states

  10. Boltzmann Distribution (cont’d) • When we want to maximize f rather than minimize it, we replace f with −f
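
A small numerical illustration (mine, not from the slides) of how T shapes π_T, on a toy three-state space:

```python
import math

def boltzmann(scores, T):
    """Boltzmann distribution pi_T over states with the given f-values."""
    weights = [math.exp(-f_s / T) for f_s in scores]
    Z = sum(weights)  # the normalization term Z_T
    return [w / Z for w in weights]

scores = [0.0, 1.0, 2.0]         # state 0 is the global minimum
print(boltzmann(scores, 100.0))  # ~[0.337, 0.333, 0.330]: near uniform
print(boltzmann(scores, 0.1))    # ~[1.0, 4.5e-05, 2.1e-09]: nearly all mass on the minimum
```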

  11. Def: Annealing (FUN FACT!) • Annealing is a process in metallurgy in which a metal is heated for a period of time and then slowly cooled. • The heat breaks bonds and causes the atoms to diffuse, moving the metal towards its equilibrium state and thus getting rid of impurities and crystal defects. • When performed correctly (i.e. with a proper cooling schedule), the process makes a metal more homogeneous and thus stronger and more ductile as a whole. • Parallels abound! Source: Wikipedia

  12. Theorem (will not prove) • Let p_T denote the probability that a random element chosen according to π_T is a global minimum of f • Then p_T → 1 as T → 0 Source: Häggström

  13. 2

  14. Designing an Algorithm • Design a MC on our chosen state space S with stationary distribution π_T • Design a cooling schedule: a sequence of integers N_1, N_2, … and a sequence of strictly decreasing temperature values T_1 > T_2 > … such that for each k in the sequence we will run our MC at temperature T_k for N_k steps
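
The deck leaves the schedule abstract; as one concrete sketch, here is a geometric schedule in the (T_k, N_k) form just described. The decay rate and step counts are arbitrary values of mine, not recommendations from the slides.

```python
def geometric_schedule(T0, alpha, steps_per_temp, num_temps):
    """Geometric cooling: T_k = T0 * alpha^k, with N_k held fixed.
    Returns the (temperature, steps) pairs the annealing loop consumes."""
    return [(T0 * alpha ** k, steps_per_temp) for k in range(num_temps)]

# e.g. 30 temperatures decaying from 10.0 down to ~0.005, 1000 steps each
schedule = geometric_schedule(T0=10.0, alpha=0.77, steps_per_temp=1000, num_temps=30)
```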

  15. Notes on Cooling Schedules • Picking a cooling schedule is more of an art than a science. • Cooling too quickly can cause the chain to get caught in local minima • Tragically, cooling schedules with provably good chances of finding a global minimum can require more time than it would take to actually enumerate every element in the state space

  16. Example: Traveling Salesman

  17. The Traveling Salesman Problem • n cities • Want to find a path (a permutation σ of {1, …, n}) that minimizes our distance function f(σ).
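
A sketch of one possible distance function f, under two assumptions of mine that the slide doesn’t state: cities are 2-D points, and the tour is closed.

```python
import math

def tour_length(path, coords):
    """Total length of the closed tour visiting coords in path order."""
    n = len(path)
    return sum(math.dist(coords[path[i]], coords[path[(i + 1) % n]])
               for i in range(n))
```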

  18. Our Markov Chain • Pick u.a.r. vertices v_i, v_j such that i < j • With some probability, reverse the order of the substring v_i … v_j on our path

  19. Our Markov Chain (cont’d) • Pick u.a.r. vertices v_i, v_j such that i < j • Let σ be the current state and let σ′ be the state obtained by reversing the substring v_i … v_j in σ • With probability min(1, e^{−(f(σ′) − f(σ))/T}), transition to σ′, else do nothing.
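
Putting the reversal move and the acceptance probability together, a sketch of one step of this chain (reusing the hypothetical tour_length sketched above):

```python
import math
import random

def tsp_step(path, coords, T):
    """One Metropolis step of the reversal chain at temperature T."""
    n = len(path)
    i, j = sorted(random.sample(range(n), 2))  # pick i < j u.a.r.
    # sigma': reverse the substring v_i .. v_j of the current path
    candidate = path[:i] + path[i:j + 1][::-1] + path[j + 1:]
    delta = tour_length(candidate, coords) - tour_length(path, coords)
    # transition with probability min(1, e^(-(f(sigma') - f(sigma)) / T))
    if delta <= 0 or random.random() < math.exp(-delta / T):
        return candidate
    return path  # else do nothing
```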

  20. 3

  21. DEMO TIME!

  22. Sources • “Annealing.” Wikipedia, The Free Encyclopedia, 22 Nov. 2010. Web. 3 Mar. 2011. • Häggström, Olle. Finite Markov Chains and Algorithmic Applications. Cambridge University Press, 2002.
