
Local Search Algorithms

Local search algorithms address combinatorial problems: finding a grouping, ordering, or assignment of a discrete set of objects that satisfies certain constraints. Such problems arise in many domains of computer science and various application areas.


Presentation Transcript


  1. Local Search Algorithms

  2. Local Search

  3. Combinatorial problems ... • involve finding a grouping, ordering, or assignment of a discrete set of objects which satisfies certain constraints • arise in many domains of computer science and various application areas • have high computational complexity (NP-hard) • are solved in practice by searching an exponentially large space of candidate / partial solutions

  4. Examples of combinatorial problems: • finding shortest/cheapest round trips (TSP) • planning, scheduling, time-tabling • resource allocation • protein structure prediction • genome sequence assembly • microarchitecture design-space exploration

  5. The Traveling Salesperson Problem (TSP) • TSP – optimization variant: • For a given weighted graph G = (V,E,w), find a Hamiltonian cycle in G with minimal weight, • i.e., find the shortest round trip visiting each vertex exactly once. • TSP – decision variant: • For a given weighted graph G = (V,E,w) and a bound b, decide whether a Hamiltonian cycle with weight ≤ b exists in G.
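As a concrete illustration of the optimization variant, a minimal Python sketch of evaluating a candidate round trip, assuming the graph is given as a symmetric distance matrix; the names tour_length and brute_force_tsp are illustrative, and the exhaustive baseline is only feasible for very small instances:

    import itertools

    def tour_length(tour, dist):
        # Total weight of the Hamiltonian cycle visiting vertices in the given order.
        n = len(tour)
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    def brute_force_tsp(dist):
        # Exhaustive baseline: checks all (n-1)! tours, tiny instances only.
        n = len(dist)
        best = min(itertools.permutations(range(1, n)),        # fix vertex 0 as the start
                   key=lambda perm: tour_length((0,) + perm, dist))
        return (0,) + best, tour_length((0,) + best, dist)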

  6. TSP instance: shortest round trip through 532 US cities

  7. Search Methods Types of search methods: • systematic ←→ local search • deterministic ←→ stochastic • sequential ←→ parallel

  8. Search Methods Problem solving by search: defining search problems, search strategies • Uninformed search strategies • Informed (heuristic) search strategies • Local search strategies (Hill Climbing, Simulated Annealing, Tabu Search, evolutionary algorithms, PSO, ACO)

  9. Typology • Simple local search – keeps a single neighboring state • Hill climbing – chooses the best neighbor • Simulated annealing – chooses the best neighbor probabilistically • Tabu search – keeps a list of recently visited solutions • Beam local search – keeps several states (a population of states) • Evolutionary algorithms • Particle swarm optimisation – optimisation based on swarm behaviour • Ant colony optimisation – optimisation based on ant behaviour

  10. Local Search: • start from initial position • iteratively move from current position to neighboring position • use evaluation function for guidance Two main classes: • local search on partial solutions • local search on complete solutions

  11. local search on partial solutions

  12. Local search for partial solutions • Fix an ordering of the variables. • Span a tree such that at each level a given variable is assigned a value. • Perform a depth-first search. • But use heuristics to guide the search: choose the best child according to some heuristic. (DFS with node ordering)

  13. Heuristic search

  14. Construction Heuristics for partial solutions • search space: space of partial solutions • search steps: extend partial solutions with assignment for the next element • solution elements are often ranked according to a greedy evaluation function

  15. Nearest Neighbor heuristic for the TSP:
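A minimal sketch of this construction heuristic, under the same distance-matrix assumption as the TSP sketch above (function and variable names are illustrative):

    def nearest_neighbor_tour(dist, start=0):
        # Greedily extend the partial tour with the closest unvisited vertex.
        n = len(dist)
        unvisited = set(range(n)) - {start}
        tour = [start]
        while unvisited:
            current = tour[-1]
            nxt = min(unvisited, key=lambda v: dist[current][v])
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour  # the closing edge back to `start` is implied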

  16. DFS • Once a solution has been found (with the first dive into the tree) we can continue searching the tree with DFS and backtracking. • In fact, this is what we did with DFBnB. • DFBnB with node ordering.

  17. Limited Discrepancy Search • At each node, the heuristic prefers one of the children. • A discrepancy is when you go against the heuristic. • Perform DFS from the root node with k discrepancies. • Start with k = 0, then increase k by 1. • Stop at any time.
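A minimal sketch of limited discrepancy search on a tree, assuming each node exposes heuristically ordered children (preferred child first) and a goal test; each probe here allows up to k discrepancies, a common simplification of the original exactly-k formulation, and all names are illustrative:

    def lds_probe(node, k, children, is_goal):
        # Depth-first probe allowing at most k moves against the heuristic.
        if is_goal(node):
            return node
        kids = children(node)          # kids[0] is the heuristically preferred child
        for i, child in enumerate(kids):
            cost = 0 if i == 0 else 1  # non-preferred children cost one discrepancy
            if cost <= k:
                result = lds_probe(child, k - cost, children, is_goal)
                if result is not None:
                    return result
        return None

    def limited_discrepancy_search(root, children, is_goal, max_k):
        # Probe with k = 0, 1, 2, ... discrepancies; anytime: can stop at any k.
        for k in range(max_k + 1):
            result = lds_probe(root, k, children, is_goal)
            if result is not None:
                return result
        return None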

  18. Number of Discrepancies • Assume a binary tree where the heuristic always prefers the left child. [Tree diagram: each node is labeled with the number of discrepancies on the path to it, from 0 along the all-left path up to 3.]

  19. Limited Discrepancy Search Advantages: • Anytime algorithm • Solutions are ordered according to heuristics

  20. local search on complete solutions

  21. Iterative Improvement (Greedy Search): • initialize search at some point of search space • in each step, move from the current search position to a neighboring position with better evaluation function value

  22. Hill climbing [figure: f-value plotted over states] • Choose the neighbor with the largest improvement as the next state. f-value = evaluation(state) while f-value(state) < f-value(next-best(state)): state := next-best(state)

  23. Hill climbing function Hill-Climbing(problem) returns a solution state current ← Make-Node(Initial-State[problem]) loop do next ← a highest-valued successor of current if Value[next] < Value[current] then return current current ← next end
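A Python sketch of the same loop, assuming problem-specific neighbors and evaluate functions and maximization of the evaluation value (names are illustrative):

    def hill_climbing(initial, neighbors, evaluate):
        # Repeatedly move to the best neighbor; stop when no neighbor improves.
        current = initial
        while True:
            candidates = list(neighbors(current))
            if not candidates:
                return current
            best_next = max(candidates, key=evaluate)
            if evaluate(best_next) <= evaluate(current):
                return current        # local optimum (or plateau) reached
            current = best_next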

  24. Problems with local search Typical problems with local search (with hill climbing in particular) • getting stuck in local optima • being misguided by evaluation/objective function

  25. Stochastic Local Search • randomize initialization step • randomize search steps such that suboptimal/worsening steps are allowed • improved performance & robustness • typically, degree of randomization controlled by noise parameter

  26. Stochastic Local Search Pros: • for many combinatorial problems more efficient than systematic search • easy to implement • easy to parallelize Cons: • often incomplete (no guarantees for finding existing solutions) • highly stochastic behavior • often difficult to analyze theoretically / empirically

  27. Simple SLS methods Random Search (Blind Guessing): • In each step, randomly select one element of the search space. (Uninformed) Random Walk: • In each step, randomly select one of the neighbouring positions of the search space and move there.
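A minimal sketch of both methods, assuming a sample_state function that draws a uniformly random element of the search space, and the same neighbors/evaluate interface as the hill-climbing sketch above (names and the step budgets are illustrative):

    import random

    def random_search(sample_state, evaluate, steps=10_000):
        # Blind guessing: keep the best of `steps` uniformly sampled states.
        return max((sample_state() for _ in range(steps)), key=evaluate)

    def random_walk(initial, neighbors, steps=10_000):
        # Move to a uniformly random neighbor at each step
        # (assumes every state has at least one neighbor).
        current = initial
        for _ in range(steps):
            current = random.choice(list(neighbors(current)))
        return current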

  28. Random restart hill climbing Hill climbing can get stuck at a point from which no progress is possible. We can improve it as follows: • Pick a random starting point and run hill climbing. • If the solution found is better than the best solution found so far – keep it. • Go back to step 1. When do we stop? – After a fixed number of iterations. – After a fixed number of iterations in which no improvement over the best solution found so far was made.

  29. Random restart hill climbing [figure: f-value plotted over states, showing repeated climbs from different starting points] f-value = evaluation(state)
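A minimal sketch of the restart loop, reusing the hill_climbing sketch above and assuming a random_state generator for the starting points; the fixed restart count is one illustrative stopping rule (the slide also mentions stopping after a fixed number of restarts without improvement):

    def random_restart_hill_climbing(random_state, neighbors, evaluate, restarts=100):
        # Run hill climbing from several random starting points, keep the best result.
        best = None
        for _ in range(restarts):
            candidate = hill_climbing(random_state(), neighbors, evaluate)
            if best is None or evaluate(candidate) > evaluate(best):
                best = candidate
        return best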

  30. Randomized Iterative Improvement: • initialize search at some point of the search space • search steps: – with probability p, move from the current search position to a randomly selected neighboring position – otherwise, move from the current search position to a neighboring position with a better evaluation function value • There are many variations in how the random neighbor is chosen, and how many random steps are taken • Example: take 100 steps in one direction ("army mistake correction") – to escape from local optima
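A minimal sketch, assuming the same neighbors/evaluate interface as above; the noise parameter p controls how often a random (possibly worsening) step replaces the greedy step (names and the step budget are illustrative):

    import random

    def randomized_iterative_improvement(initial, neighbors, evaluate, p=0.1, steps=10_000):
        current = best = initial
        for _ in range(steps):
            candidates = list(neighbors(current))
            if not candidates:
                break
            if random.random() < p:
                current = random.choice(candidates)      # random walk step, may worsen
            else:
                current = max(candidates, key=evaluate)  # greedy improvement step
            if evaluate(current) > evaluate(best):
                best = current                           # remember the best state seen
        return best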

  31. Simulated annealing Combinatorial search technique inspired by the physical process of annealing [Kirkpatrick et al. 1983, Cerny 1985] Outline • Select a neighbor at random. • If better than current state go there. • Otherwise, go there with some probability. • Probability goes down with time (similar to temperature cooling)

  32. Simulated annealing eE/T

  33. Generic choices for annealing schedule

  34. Pseudo code function Simulated-Annealing(problem, schedule) returns solution state current ← Make-Node(Initial-State[problem]) for t ← 1 to infinity T ← schedule[t] // T goes downwards. if T = 0 then return current next ← Random-Successor(current) ΔE ← f-Value[next] − f-Value[current] if ΔE > 0 then current ← next else current ← next with probability e^(ΔE/T) end
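A runnable Python version of this pseudocode, assuming maximization of f_value and using a simple geometric cooling schedule in place of the abstract schedule[t] (the schedule, parameter values, and names are illustrative choices):

    import math
    import random

    def simulated_annealing(initial, random_successor, f_value,
                            t0=1.0, alpha=0.995, t_min=1e-4):
        current = initial
        t = t0
        while t > t_min:                          # temperature decreases geometrically
            nxt = random_successor(current)
            delta = f_value(nxt) - f_value(current)
            if delta > 0 or random.random() < math.exp(delta / t):
                current = nxt                     # accept improvements always, worsening
                                                  # moves with probability e^(dE/T)
            t *= alpha
        return current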

  35. Example application to the TSP [Johnson & McGeoch 1997] baseline implementation: • start with random initial solution • use 2-exchange neighborhood • simple annealing schedule → relatively poor performance improvements: • look-up table for acceptance probabilities • neighborhood pruning • low-temperature starts

  36. Summary-Simulated Annealing Simulated Annealing . . . • is historically important • is easy to implement • has interesting theoretical properties (convergence), but these are of very limited practical relevance • achieves good performance often at the cost of substantial run-times

  37. Tabu Search • Combinatorial search technique which heavily relies on the use of an explicit memory of the search process [Glover 1989, 1990] to guide search process • memory typically contains only specific attributes of previously seen solutions • simple tabu search strategies exploit only short term memory • more complex tabu search strategies exploit long term memory

  38. Tabu search – exploiting short term memory • in each step, move to the best neighboring solution although it may be worse than the current one • to avoid cycles, tabu search tries to avoid revisiting previously seen solutions by basing the memory on attributes of recently seen solutions • the tabu list stores attributes of the tl most recently visited solutions; parameter tl is called tabu list length or tabu tenure • solutions which contain tabu attributes are forbidden
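A minimal sketch of tabu search with short-term memory, assuming neighbors yields (move_attribute, state) pairs where the attribute is hashable (e.g. the swapped pair of a 2-exchange); the aspiration criterion from the next slide is included: a tabu move is allowed if it beats the best solution found so far (names are illustrative):

    from collections import deque

    def tabu_search(initial, neighbors, evaluate, tenure=10, max_iters=1000):
        current = best = initial
        tabu = deque(maxlen=tenure)            # attributes of the tl most recent moves
        for _ in range(max_iters):
            candidates = [
                (attr, state) for attr, state in neighbors(current)
                if attr not in tabu or evaluate(state) > evaluate(best)  # aspiration
            ]
            if not candidates:
                break                          # all neighboring solutions are tabu
            attr, current = max(candidates, key=lambda c: evaluate(c[1]))
            tabu.append(attr)                  # oldest attribute drops out automatically
            if evaluate(current) > evaluate(best):
                best = current
        return best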

  39. Tabu Search Problem: previously unseen solutions may be tabu → use of aspiration criteria to override tabu status Stopping criteria: • all neighboring solutions are tabu • maximum number of iterations exceeded • number of iterations without improvement exceeds a given limit Robust Tabu Search [Taillard 1991], Reactive Tabu Search [Battiti & Tecchiolli 1994–1997]

  40. Genetic algorithms • Combinatorial search technique inspired by the evolution of biological species. • population of individual solutions represented as strings • individuals within population are evaluated based on their "fitness" (evaluation function value) • population is manipulated via evolutionary operators – mutation – crossover – selection

  41. Genetic algorithms • How to generate the next generation: • 1) Selection: we select a number of states from the current generation. (we can use the fitness function in any reasonable way) • 2) Crossover: select 2 states and produce a child. • 3) Mutation: change some of the genes.

  42. General Genetic algorithm • POP = initial population • repeat { • NEW_POP = EMPTY • for i = 1 to POP_SIZE { • x = fit_individual // natural selection • y = fit_individual • child = cross_over(x, y) • if (small random probability) • mutate(child) • add child to NEW_POP • } • POP = NEW_POP • } until solution found • return (best state in POP)
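A compact Python sketch of this loop, assuming individuals are lists of 0/1 genes with positive fitness values, fitness-proportional selection, one-point crossover, and per-gene bit-flip mutation (all operator choices and names are illustrative):

    import random

    def genetic_algorithm(pop, fitness, generations=200, p_mut=0.01):
        for _ in range(generations):
            weights = [fitness(ind) for ind in pop]              # selection weights
            new_pop = []
            for _ in range(len(pop)):
                x, y = random.choices(pop, weights=weights, k=2) # natural selection
                cut = random.randrange(1, len(x))                # one-point crossover
                child = x[:cut] + y[cut:]
                child = [1 - g if random.random() < p_mut else g # bit-flip mutation
                         for g in child]
                new_pop.append(child)
            pop = new_pop
        return max(pop, key=fitness)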

  43. Example

  44. 8-queen example

  45. Summary: Genetic Algorithms Genetic Algorithms • use populations, which leads to increased search space exploration • allow for a large number of different implementation choices • typically reach best performance when using operators that are based on problem characteristics • achieve good performance on a wide range of problems
