
Unifying Local and Exhaustive Search






Presentation Transcript


  1. Unifying Local and Exhaustive Search John Hooker Carnegie Mellon University September 2005

  2. Exhaustive vs. Local Search • They are generally regarded as very different. • Exhaustive methods examine every possible solution, at least implicitly. • Branch and bound, Benders decomposition. • Local search methods typically examine only a portion of the solution space. • Simulated annealing, tabu search, genetic algorithms, GRASP (greedy randomized adaptive search procedure).

  3. Exhaustive vs. Local Search • However, exhaustive and local search are often closely related. • “Heuristic algorithm” = “search algorithm” • “Heuristic” is from the Greek εὑρίσκειν (to search, to find). • Two classes of exhaustive search methods are very similar to corresponding local search methods: • Branching methods. • Nogood-based search.

  4. Why Unify Exhaustive & Local Search? • Encourages design of algorithms that have several exhaustive and inexhaustive options. • Can move from exhaustive to inexhaustive options as problem size increases. • Suggests how techniques used in exhaustive search can carry over to local search. • And vice-versa.

  5. Why Unify Exhaustive & Local Search? • We will use an example (traveling salesman problem with time windows) to show: • Exhaustive branching can suggest a generalization of a local search method (GRASP). • The bounding mechanism in branch and bound also carries over to generalized GRASP. • Exhaustive nogood-based search can suggest a generalization of a local search method (tabu search).

  6. Outline • Branching search. • Generic algorithm (exhaustive & inexhaustive) • Exhaustive example: Branch and bound • Inexhaustive examples: Simulated annealing, GRASP. • Solving TSP with time windows • Using exhaustive branching & generalized GRASP. • Nogood-based search. • Generic algorithm (exhaustive & inexhaustive) • Exhaustive example: Benders decomposition • Inexhaustive example: Tabu search • Solving TSP with time windows • Using exhaustive nogood-based search and generalized tabu search.

  7. Branching Search • Each node of the branching tree corresponds to a restriction P of the original problem. • Restriction = constraints are added. • Branch by generating restrictions of P. • Add a new leaf node for each restriction. • Keep branching until problem is “easy” to solve. • Notation: • feas(P) = feasible set of P • relax(P) = a relaxation of P

  8. Branching Search Algorithm • Repeat while leaf nodes remain: • Select a problem P at a leaf node. • If P is “easy” to solve then • If solution of P is better than previous best solution, save it. • Remove P from tree. • Else • If optimal value of relax(P) is better than previous best solution, then • If solution of relax(P) is feasible for P then P is “easy”; save the solution and remove P from tree • Else branch. • Else • Remove P from tree
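The loop on this slide can be sketched directly in code. The toy instance below is mine, not the talk's: minimize c·x over x in {0,1}^n subject to a cardinality constraint, with a restriction P being a partial assignment and relax(P) obtained by dropping the constraint.

```python
# Minimal sketch of the generic branching-search loop.  Hypothetical
# toy instance: minimize c.x over x in {0,1}^n subject to sum(x) == K.
# A restriction P is a partial assignment (a tuple of fixed values);
# P is "easy" once all variables are fixed.  relax(P) drops the
# cardinality constraint, so it is solved by setting each free
# variable to 1 only if its cost is negative.

c = [3, -2, 5, 4]   # objective coefficients
K = 2               # exactly K variables must equal 1
n = len(c)

def value(x):
    return sum(ci * xi for ci, xi in zip(c, x))

def feasible(x):
    return sum(x) == K

def solve_relaxation(P):
    return list(P) + [1 if ci < 0 else 0 for ci in c[len(P):]]

best = None    # previous best (value, solution)
leaves = [()]  # leaf nodes of the branching tree; root = empty assignment
while leaves:
    P = leaves.pop()                      # select a problem at a leaf node
    if len(P) == n:                       # P is "easy": everything is fixed
        if feasible(P) and (best is None or value(P) < best[0]):
            best = (value(P), list(P))    # better than previous best: save
        continue                          # remove P from the tree
    xr = solve_relaxation(P)
    if best is not None and value(xr) >= best[0]:
        continue                          # bound is no better: remove P
    if feasible(xr):                      # relaxation solution feasible for P
        best = (value(xr), xr)            # then P is "easy": save and remove
        continue
    leaves.append(P + (0,))               # branch: the two restrictions
    leaves.append(P + (1,))               # x_next = 0 and x_next = 1
```

Using a stack of leaves makes the traversal depth-first; any leaf-selection rule fits the generic algorithm.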

  9. Branching Search Algorithm • To branch: • If the set of restrictions P1, …, Pk of P generated so far is complete, then • Remove P from tree. • Else • Generate new restrictions Pk+1, …, Pm and leaf nodes for them.

  10. Branching Search Algorithm • To branch: • If the set of restrictions P1, …, Pk of P generated so far is complete, then • Remove P from tree. • Else • Generate new restrictions Pk+1, …, Pm and leaf nodes for them. • Exhaustive vs. heuristic algorithm • In exhaustive search, “complete” = exhaustive • In a heuristic algorithm, “complete” ≠ exhaustive

  11. Exhaustive search: Branch and bound Every restriction P is initially too “hard” to solve. So, solve the LP relaxation. If the LP solution is feasible for P, then P is “easy.” [Diagram: original problem at the root, previously removed nodes, and the current leaf node P.]

  12. Exhaustive search: Branch and bound Every restriction P is initially too “hard” to solve. So, solve the LP relaxation. If the LP solution is feasible for P, then P is “easy.” [Diagram: as on slide 11, with previously removed restrictions P1, P2, …, Pk below P.]

  13. Exhaustive search: Branch and bound Every restriction P is initially too “hard” to solve. So, solve the LP relaxation. If the LP solution is feasible for P, then P is “easy.” Create more branches if the value of relax(P) is better than the previous solution and P1, …, Pk are not exhaustive. [Diagram: as on slide 12.]

  14. Heuristic algorithm: Simulated annealing The search tree has 2 levels. Second-level problems are always “easy” to solve by searching the neighborhood of the previous solution. [Diagram: original problem at the root; previously removed restrictions P1, P2, …, Pk (solution of Pk = x); currently at leaf node P, which was generated because {P1, …, Pk} is not complete.]

  15. Heuristic algorithm: Simulated annealing feas(P) = neighborhood of x. Randomly select y ∈ feas(P). Solution of P = y if y is better than x; otherwise, y with probability p, x with probability 1 − p. [Diagram: as on slide 14.]
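The acceptance step on this slide can be sketched as follows. The slide leaves the probability p unspecified; the Metropolis rule used here is an assumption, and the toy problem is mine.

```python
import math
import random

# Sketch of the simulated-annealing step: draw a random neighbor y from
# feas(P) = neighborhood(x); accept y if it is better, otherwise accept
# it with probability p.  The Metropolis choice
# p = exp(-(f(y) - f(x)) / T) is an assumption, not from the talk.

def anneal_step(x, f, neighborhood, T, rng=random):
    y = rng.choice(neighborhood(x))       # randomly select y in feas(P)
    if f(y) < f(x):                       # y better than x: solution = y
        return y
    p = math.exp(-(f(y) - f(x)) / T)      # otherwise y with probability p,
    return y if rng.random() < p else x   # x with probability 1 - p

# usage on a toy problem: minimize f(v) = v^2 over the integers,
# with a geometric cooling schedule
f = lambda v: v * v
neighborhood = lambda v: [v - 1, v + 1]
random.seed(0)
v = 10
for step in range(200):
    v = anneal_step(v, f, neighborhood, T=max(0.01, 2.0 * 0.95 ** step))
```

As the temperature T falls, worse neighbors are accepted less and less often, so the walk settles into a local (here global) minimum.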

  16. Heuristic algorithm: GRASP (greedy randomized adaptive search procedure) Greedy phase: select randomized greedy values until all variables are fixed. [Diagram: from the original problem, assignments x1 = v1, x2 = v2, x3 = v3 lead to P, which is “hard” to solve because relax(P) contains no constraints; the final assignment gives a problem that is “easy” to solve, with solution v.]

  17. Heuristic algorithm: GRASP Local search phase: feas(P2) = neighborhood of v. [Diagram: as on slide 16, with restrictions P1, P2, …, Pk branching below.]

  18. Heuristic algorithm: GRASP Stop the local search when a “complete” set of neighborhoods has been searched. The process then starts over with a new greedy phase. [Diagram: as on slide 17.]
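The two GRASP phases can be sketched on a toy problem of my own choosing (not from the talk): order jobs to minimize the sum of completion times, with a randomized greedy construction and first-improvement descent over adjacent swaps.

```python
import random

# Hypothetical GRASP instance.  Greedy phase: repeatedly pick a random
# job from a restricted candidate list (the few shortest remaining
# jobs).  Local search phase: first-improvement descent that stops when
# a complete adjacent-swap neighborhood holds no better solution.

p = [4, 1, 3, 2]        # job processing times

def total(seq):
    """Sum of completion times of the jobs in the given order."""
    clock = cost = 0
    for j in seq:
        clock += p[j]
        cost += clock
    return cost

def neighbors(seq):
    """Adjacent-swap neighborhood of a sequence."""
    for i in range(len(seq) - 1):
        y = list(seq)
        y[i], y[i + 1] = y[i + 1], y[i]
        yield y

def grasp(iters=20, rcl_size=2, rng=random):
    best = None
    for _ in range(iters):
        # greedy phase: randomized greedy construction
        remaining = sorted(range(len(p)), key=lambda j: p[j])
        x = []
        while remaining:
            j = rng.choice(remaining[:rcl_size])   # restricted candidate list
            x.append(j)
            remaining.remove(j)
        # local search phase: stop when the complete neighborhood
        # has been searched without improvement
        improved = True
        while improved:
            improved = False
            for y in neighbors(x):
                if total(y) < total(x):
                    x, improved = y, True
                    break
        if best is None or total(x) < total(best):
            best = x
    return best

random.seed(1)
best = grasp()
```

For this objective an adjacent-swap local optimum is the shortest-processing-time order, so every restart descends to the same optimum; on harder problems the random restarts are what make GRASP effective.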

  19. An Example: TSP with Time Windows • A salesman must visit several cities. • Find a minimum-length tour that visits each city exactly once and returns to home base. • Each city must be visited within a time window.

  20. An Example: TSP with Time Windows [Diagram: home base A and customers B, C, D, E, with travel times on the edges and time windows at the customers: [20,35], [15,25], [25,35], [10,30].]

  21. Relaxation of TSP Suppose that customers x0, x1, …, xk have been visited so far. Let tij = travel time from customer i to j. Then the total travel time of the completed route is bounded below by

  (earliest time the vehicle can leave customer xk) + Σ over unvisited customers j of (min time from customer j’s predecessor to j) + (min time from the last customer back to home)

  22. [Diagram: partial route x0, x1, x2, …, xk, annotated with the three terms of the bound: the earliest time the vehicle can leave customer xk, the min time from each unvisited customer j’s predecessor to j, and the min time from the last customer back to home.]
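The bound on slide 21 can be computed as below. The travel times and window-opening times are hypothetical data of mine, not the instance in the talk.

```python
# Sketch of the TSPTW relaxation bound: given a partial route, add the
# earliest departure time from the last visited customer, the cheapest
# way into each unvisited customer, and the cheapest return leg home.
# All data here are illustrative.

HOME, CUSTOMERS = "A", ["B", "C", "D", "E"]
t = {("A", "B"): 5, ("A", "C"): 8, ("A", "D"): 3, ("A", "E"): 4,
     ("B", "C"): 6, ("B", "D"): 7, ("B", "E"): 5,
     ("C", "D"): 5, ("C", "E"): 7, ("D", "E"): 6}
release = {"B": 20, "C": 15, "D": 10, "E": 25}   # window opening times

def tt(i, j):
    return t.get((i, j)) or t[(j, i)]

def earliest_departure(route):
    """Earliest time the vehicle can leave the last customer on the
    partial route (waits for each window to open; window deadlines are
    ignored here to keep the sketch short)."""
    clock = 0
    for i, j in zip(route, route[1:]):
        clock = max(clock + tt(i, j), release[j])
    return clock

def lower_bound(route):
    unvisited = [c for c in CUSTOMERS if c not in route]
    if not unvisited:
        return earliest_departure(route) + tt(route[-1], HOME)
    # min time from customer j's predecessor to j, for each unvisited j
    reach = sum(min(tt(i, j) for i in CUSTOMERS + [HOME] if i != j)
                for j in unvisited)
    # min time from the last customer back to home
    back = min(tt(j, HOME) for j in unvisited)
    return earliest_departure(route) + reach + back

bound = lower_bound(["A", "D"])
```

Each unvisited customer contributes at least its cheapest incoming edge, and one unvisited customer must supply the final edge home, so the sum is a valid lower bound on completion time.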

  23. Exhaustive Branch-and-Bound A _ _ _ _ A (sequence of customers visited)

  24. Exhaustive Branch-and-Bound Branch on the first customer: AB _ _ _ A, AE _ _ _ A, AC _ _ _ A, AD _ _ _ A

  25. Exhaustive Branch-and-Bound Branch under AD _ _ _ A: ADC _ _ A, ADB _ _ A, ADE _ _ A

  26. Exhaustive Branch-and-Bound ADCBEA: feasible, value = 36

  27. Exhaustive Branch-and-Bound ADCEBA: feasible, value = 34

  28. Exhaustive Branch-and-Bound ADB _ _ A: relaxation value = 36, prune

  29. Exhaustive Branch-and-Bound ADE _ _ A: relaxation value = 40, prune

  30. Exhaustive Branch-and-Bound AC _ _ _ A: relaxation value = 31. Continue in this fashion.

  31. Exhaustive Branch-and-Bound Optimal solution
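The whole depth-first traversal traced on slides 23–31 can be sketched as a short program. The instance below (times, windows) is hypothetical, not the one in the talk, and the value of a tour here is its completion time, travel plus any waiting for windows to open.

```python
# Sketch of exhaustive depth-first branch and bound for TSP with time
# windows: extend the partial route one customer at a time, and prune
# a node when its relaxation value is no better than the incumbent.
# All data here are illustrative.

HOME, CUSTOMERS = "A", ["B", "C", "D", "E"]
t = {("A", "B"): 5, ("A", "C"): 8, ("A", "D"): 3, ("A", "E"): 4,
     ("B", "C"): 6, ("B", "D"): 7, ("B", "E"): 5,
     ("C", "D"): 5, ("C", "E"): 7, ("D", "E"): 6}
windows = {"B": (20, 35), "C": (15, 25), "D": (10, 30), "E": (25, 35)}

def tt(i, j):
    return t.get((i, j)) or t[(j, i)]

def visit_time(clock, i, j):
    a, b = windows[j]
    arrive = max(clock + tt(i, j), a)        # wait for the window to open
    return arrive if arrive <= b else None   # None: window missed

def solve():
    best = [None, float("inf")]              # incumbent route and value

    def bound(route, clock):
        # relaxation: cheapest way into each unvisited customer plus
        # the cheapest return leg (windows ignored)
        unvisited = [c for c in CUSTOMERS if c not in route]
        if not unvisited:
            return clock + tt(route[-1], HOME)
        reach = sum(min(tt(i, j) for i in CUSTOMERS + [HOME] if i != j)
                    for j in unvisited)
        return clock + reach + min(tt(j, HOME) for j in unvisited)

    def dfs(route, clock):
        if bound(route, clock) >= best[1]:   # relaxation value no better:
            return                           # prune this node
        unvisited = [c for c in CUSTOMERS if c not in route]
        if not unvisited:
            total = clock + tt(route[-1], HOME)
            if total < best[1]:              # feasible and better: save it
                best[:] = [route + [HOME], total]
            return
        for j in unvisited:                  # branch on the next customer
            arrive = visit_time(clock, route[-1], j)
            if arrive is not None:
                dfs(route + [j], arrive)

    dfs([HOME], 0)
    return best

route, value = solve()
```

Branches whose time windows cannot be met are cut immediately, and the relaxation prunes the rest, so only a small part of the 4! tree is explored.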

  32. Generalized GRASP Basically, GRASP = greedy solution + local search. Begin with greedy assignments that can be viewed as creating “branches.” A _ _ _ _ A (sequence of customers visited)

  33. Generalized GRASP Greedy phase: visit the customer that can be served earliest from A. AD _ _ _ A

  34. Generalized GRASP Greedy phase: next, visit the customer that can be served earliest from D. ADC _ _ A

  35. Generalized GRASP Greedy phase: continue until all customers are visited. ADCBEA: feasible, value = 34. Save this solution.

  36. Generalized GRASP Local search phase: backtrack randomly.

  37. Generalized GRASP Local search phase: delete the subtree already traversed.

  38. Generalized GRASP Local search phase: randomly select a partial solution in the neighborhood of the current node. ADE _ _ A

  39. Generalized GRASP Greedy phase: complete the solution in greedy fashion. ADEBCA: infeasible.

  40. Generalized GRASP Local search phase: randomly backtrack.

  41. Generalized GRASP Continue in a similar fashion. AB _ _ _ A, ABD _ _ A, ABDECA: infeasible. An exhaustive search algorithm (branching search) suggests a generalization of a heuristic algorithm (GRASP).
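The greedy-plus-random-backtracking scheme traced above can be sketched as follows, on a hypothetical TSPTW instance of mine (not the instance in the talk).

```python
import random

# Sketch of generalized GRASP.  Greedy phase: always visit the customer
# that can be served earliest.  Local search phase: randomly backtrack
# to a partial route, exclude the branch already traversed, and
# complete greedily again.  All instance data are illustrative.

HOME, CUSTOMERS = "A", ["B", "C", "D", "E"]
t = {("A", "B"): 5, ("A", "C"): 8, ("A", "D"): 3, ("A", "E"): 4,
     ("B", "C"): 6, ("B", "D"): 7, ("B", "E"): 5,
     ("C", "D"): 5, ("C", "E"): 7, ("D", "E"): 6}
windows = {"B": (20, 35), "C": (15, 25), "D": (10, 30), "E": (25, 35)}

def tt(i, j):
    return t.get((i, j)) or t[(j, i)]

def serve_time(clock, i, j):
    a, b = windows[j]
    s = max(clock + tt(i, j), a)      # wait for the window to open
    return s if s <= b else None      # None: window already missed

def route_clock(route):
    clock = 0
    for i, j in zip(route, route[1:]):
        clock = serve_time(clock, i, j)
    return clock

def greedy_complete(route, clock, first=None):
    """Greedy phase: repeatedly visit the feasible customer that can
    be served earliest; `first` optionally forces the next customer."""
    route = list(route)
    while True:
        unvisited = [c for c in CUSTOMERS if c not in route]
        if not unvisited:
            return route + [HOME], clock + tt(route[-1], HOME)
        opts = []
        for j in unvisited:
            if first is not None and j != first:
                continue
            s = serve_time(clock, route[-1], j)
            if s is not None:
                opts.append((s, j))
        first = None
        if not opts:
            return None, None         # infeasible: no serviceable customer
        clock, j = min(opts)
        route.append(j)

def generalized_grasp(iters=25, rng=random):
    current, best_val = greedy_complete([HOME], 0)   # pure greedy route
    best_route = current
    for _ in range(iters):
        # local search phase: backtrack randomly, exclude the branch
        # already traversed, branch elsewhere, and complete greedily
        k = rng.randrange(1, len(CUSTOMERS) + 1)
        prefix, tried = current[:k], current[k]
        choices = [c for c in CUSTOMERS if c not in prefix and c != tried]
        if not choices:
            continue
        full, val = greedy_complete(prefix, route_clock(prefix),
                                    first=rng.choice(choices))
        if full is None:
            continue                  # infeasible completion: backtrack again
        current = full                # move to the new leaf
        if val < best_val:
            best_route, best_val = full, val
    return best_route, best_val

random.seed(3)
route, value = generalized_grasp()
```

On this toy instance the first greedy descent already reaches the optimum; the random backtracking then explores alternative branches, exactly the tree walk sketched on the slides.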

  42. Generalized GRASP with relaxation Exhaustive search suggests an improvement on a heuristic algorithm: use relaxation bounds to reduce the search. Greedy phase: AD _ _ _ A

  43. Generalized GRASP with relaxation Greedy phase: ADC _ _ A

  44. Generalized GRASP with relaxation Greedy phase: ADCBEA: feasible, value = 34.

  45. Generalized GRASP with relaxation Local search phase: backtrack randomly.
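The bounding idea these slides carry over can be sketched compactly: before completing a partial route, generalized GRASP computes the same relaxation bound as branch and bound and skips the branch when it cannot beat the incumbent. The travel times, clocks, and incumbent value below are hypothetical.

```python
# Sketch of the pruning test added to generalized GRASP.  All data and
# the incumbent value are illustrative, not from the talk.

HOME, CUSTOMERS = "A", ["B", "C", "D", "E"]
t = {("A", "B"): 5, ("A", "C"): 8, ("A", "D"): 3, ("A", "E"): 4,
     ("B", "C"): 6, ("B", "D"): 7, ("B", "E"): 5,
     ("C", "D"): 5, ("C", "E"): 7, ("D", "E"): 6}

def tt(i, j):
    return t.get((i, j)) or t[(j, i)]

def bound(prefix, clock):
    """Relaxation bound: current clock, plus the cheapest way into each
    unvisited customer, plus the cheapest return leg to home base."""
    unvisited = [c for c in CUSTOMERS if c not in prefix]
    if not unvisited:
        return clock + tt(prefix[-1], HOME)
    reach = sum(min(tt(i, j) for i in CUSTOMERS + [HOME] if i != j)
                for j in unvisited)
    return clock + reach + min(tt(j, HOME) for j in unvisited)

def worth_exploring(prefix, clock, incumbent):
    """Prune exactly as in exhaustive branch and bound."""
    return bound(prefix, clock) < incumbent

# with an incumbent of value 30: the AD... branch survives, while the
# AE... branch (whose time window forces a late start, clock = 25)
# can be discarded without any greedy completion
keep = worth_exploring(["A", "D"], 10, 30)
drop = worth_exploring(["A", "E"], 25, 30)
```

Because the heuristic only skips branches that provably cannot improve the incumbent, the pruning costs nothing in solution quality while shrinking the search, which is exactly the point of unifying the two frameworks.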
