
Approximation Algorithms for Path-Planning Problems


Presentation Transcript


  1. Approximation Algorithms for Path-Planning Problems
Shuchi Chawla, with Nikhil Bansal, Avrim Blum, David Karger, Adam Meyerson, and Maria Minkoff

  2. The Lost-Wallet Problem
• How should you go about finding a lost wallet?
• Several possible locations, with different likelihoods
• Task: find a good search strategy
  • visit many places with high likelihood
  • faster search ⇒ greater likelihood of discovery
  • find it before someone else does!
• The trick-or-treaters problem
  • Collect as much candy as possible between 6 and 8 pm
  • Mr. X always gives more candy than Mrs. Y

  3. The Lost-Wallet Problem
• Model as a graph problem
  • vertices are locations, labeled by likelihoods
  • edge lengths represent the time taken to go from one location to another
• Classic formulation – Traveling Salesman: find the shortest tour covering all locations
• Some complicating constraints:
  • The wallet could get stolen before you find it!
  • No candy after 8 pm
  • Cover as many locations as possible
  • Give preference to the more likely ones

  4. A probabilistic view
• At every time step, there is a fixed probability (1−γ) that the wallet gets stolen
• If a location with value (likelihood) π is visited at time t, the expected likelihood of discovery is π·γ^t
• Goal: construct a path that maximizes the total "discounted reward" collected (Discounted-Reward TSP)
• Alternately, you can only search for time D
  • Goal: construct a path of length at most D that collects maximum reward (Orienteering)
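To make the objective concrete, here is a tiny sketch (mine, not from the talk) that evaluates the discounted reward of a fixed visit schedule; the rewards, arrival times, and value of γ are made-up illustrative numbers.

```python
# Illustrative only: evaluating the discounted-reward objective for a fixed
# visit schedule.  `visits` is a list of (reward, arrival_time) pairs along the
# path; gamma is the per-time-step survival probability.  Values are made up.
def discounted_reward(visits, gamma):
    return sum(pi * gamma ** t for pi, t in visits)

# Three locations reached at times 0, 4, and 9 with gamma = 0.9:
print(discounted_reward([(1.0, 0), (3.0, 4), (5.0, 9)], 0.9))  # ≈ 4.905
```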

  5. A time-reward trade-off
• Given a weighted graph G, a root s, and rewards π_v on nodes v
• Construct a path P rooted at s
• High-level objective: collect a large reward in little time
• Orienteering: maximize the reward collected by a path of length at most D
• Discounted-Reward TSP: the reward from node v, if reached at time t, is π_v·γ^t
(Figure: reward vs. time trade-off for Orienteering and Discounted-Reward TSP)

  6. A time-reward trade-off
• Given a weighted graph G, a root s, and rewards π_v on nodes v
• Construct a path P rooted at s
• High-level objective: collect a large reward in little time
• Orienteering: maximize the reward collected by a path of length at most D
  • No approximation algorithm previously known for the rooted non-geometric version [GLV87] [Balas89] [AMN98]
• Discounted-Reward TSP: the reward from node v, if reached at time t, is π_v·γ^t
  • New problem
• A related problem – K-Traveling Salesperson: minimize length while collecting at least K reward
  • Best known: (2+ε)-approximation [Garg99] [AK00]

  7. The FedEx-Guy Problem
• The deliveryman has to deliver packages to various locations
• Packages have time-windows for delivery
• Some packages have higher priority than others
• Deliver as many packages as possible within their time-windows
• Metric of success: total reward from packages delivered on time
⇒ The Time-Window Problem

  8. The Time-Window Problem
• Find a path that visits many nodes within their time-windows
• Widely studied in the scheduling and OR literature
  • Constant-factor approximations known for points on a line, or for a few distinct time-windows
  • No approximation known for the general case
• A special case – the Deadline-TSP Problem
  • Vertices only have deadlines
  • All "release times" are 0

  9. Our results
Problem                          Approximation   Objective
K-path [Chaudhuri et al. ’03]    2+ε             min ℓ(P) s.t. π(P) = k
Min-Excess Path                  2+ε             min ε(P) s.t. π(P) = k
Orienteering (point-to-point)    3               max π(P) s.t. ℓ(P) ≤ D
Discounted-Reward TSP            6.75+ε          max Σ_v π_v γ^(t_v)
Deadline-TSP                     3 log n         max Σ_{t_v ≤ D(v)} π_v
Time-Window Problem              3 log² n        max Σ_{R(v) ≤ t_v ≤ D(v)} π_v

  10. The rest of this talk
• Point-to-point Orienteering and D-R TSP
  • Why is this difficult?
  • The Min-Excess problem and how to solve it
  • Using Min-Excess to solve Orienteering & D-R TSP
• The Time-Window Problem
  • Orienteering with deadlines
  • Incorporating release dates
• Extensions and open problems

  11. Why is Orienteering difficult?
(Recall – K-path: min ℓ(P) s.t. π(P) = k; Orienteering: max π(P) s.t. ℓ(P) ≤ D; DR-TSP: max Σ π_v γ^(t_v))
• First attempt – use distance-based approximations to approximate the reward
• Let OPT(d) = the maximum reward achievable with length d
• A 2-approximation on distance implies that ALG(d) ≥ OPT(d/2)
• However, we may have OPT(d/2) << OPT(d); for example, all of the reward may sit at distance just over d/2 from s, so a path of length d/2 collects nothing
• Bad trade-off between distance and reward!
(Figure: OPT(d) vs. the approximate path from s)

  12. Why is Orienteering difficult?
• First attempt – use distance-based approximations to approximate the reward
• Idea – modify the algorithm itself
  • Doesn't help – moat-growing always goes for the shallow fruit
• Orienteering is inherently harder; a small perturbation of the input can change the output widely
• The same problem arises with Discounted-Reward TSP
  • Multiplying the exponent by a constant gives a bad approximation
(Figure: OPT(d) vs. the approximate path from s)

  13. Why is Orienteering difficult?
• Second attempt – approximate subparts of the optimal path and shortcut the other parts
• If we stray from the optimal path by a lot, we may not be able to cover reward that is far away
• Instead, approximate the "extra" length taken by a path over the shortest-path length
(Figure: OPT vs. an approximate s–t path)

  14. Why is Orienteering difficult? (The Min-Excess Path Problem)
• Second attempt – approximate subparts of the optimal path and shortcut the other parts
• If we stray from the optimal path by a lot, we may not be able to cover reward that is far away
• Instead, approximate the "extra" length taken by a path over the shortest-path length
• If OPT obtains reward k with length d+ε, ALG should obtain the same reward with length d+α·ε

  15. The Min-Excess Problem
(K-path: min ℓ(P) s.t. π(P) = k;  Min-Excess: min ε(P) s.t. π(P) = k)
• Given a graph G, start and end nodes s, t, and rewards π_v on nodes v
• Find a path from s to t collecting reward k and minimizing the excess ε(P) = ℓ(P) − d(s,t)
• At optimality, this is exactly the same as the K-path objective of minimizing ℓ(P)
• However, the approximation is different:
  • α-approximation to K-path: length ≤ α·ℓ(P)
  • α-approximation to min-excess: length ≤ d + α·(ℓ(P) − d) = α·ℓ(P) − (α−1)·d
• Min-excess is strictly harder than K-path

  16. Solving Min-Excess
• OPT = d+ε; k-path gives us ALG = α·(d+ε), but we want ALG = d + α·ε
• Note: when ε ≈ d, α·(d+ε) ≈ d + O(ε)
• Idea: when ε is large, approximate using k-path
• What if ε << d?
• Small ε ⇒ the path is almost like a shortest path, i.e. "its distance from s mostly increases monotonically"

  17. Solving Min-Excess
• OPT = d+ε; k-path gives us ALG = α·(d+ε), but we want ALG = d + α·ε
• Idea: when ε is large, approximate using k-path
• Small ε ⇒ the path is almost like a shortest path, i.e. "its distance from s mostly increases monotonically"
• Idea: a completely monotone path ⇒ use dynamic programming to solve exactly! (a sketch follows below)
  • Binary decision for each vertex – should it be in the path or not?
  • Compute P(v_j, t) = the "best" path that has length t and ends at v_j
  • P(v_{j+1}, t): consider P(u, t′), where t′ = t − ℓ(u, v_{j+1}), and pick the best path (best u) among these
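A minimal Python sketch (my own, not the talk's code) of the dynamic program for the monotone case, under simplifying assumptions: vertex 0 is the root s, the vertices are already indexed in order of increasing distance from s, and edge lengths are small integers so that path length can be used directly as a DP index.

```python
# Sketch of the monotone-path dynamic program (simplified, not the paper's code).
from math import inf

def monotone_path_dp(n, length, reward, budget):
    """length[u][v]: integer edge length from u to v (inf if no edge);
    reward[v]: reward of vertex v.
    Returns best[v][t] = maximum reward of a monotone path from vertex 0 (= s)
    to v of total length exactly t, or -inf if no such path exists."""
    best = [[-inf] * (budget + 1) for _ in range(n)]
    best[0][0] = reward[0]                      # the root is visited at time 0
    for v in range(1, n):                       # vertices in order of distance from s
        for u in range(v):                      # candidate predecessor on the path
            w = length[u][v]
            if w == inf or w > budget:
                continue
            for t in range(budget + 1 - w):
                if best[u][t] > -inf and best[u][t] + reward[v] > best[v][t + w]:
                    best[v][t + w] = best[u][t] + reward[v]
    return best
```

From the table one can read off, for each end vertex and each length, the largest reward achievable by a monotone path; the full algorithm also has to deal with the non-monotone ("wiggly") segments, as the next slide describes.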

  18. Solving Min-Excess
• Idea: when ε is large, approximate using k-path
• Idea: a completely monotone path ⇒ use dynamic programming to solve exactly!
• General case: split OPT into alternating monotone and "wiggly" segments; solve the monotone segments exactly, approximate the wiggly ones, and patch the segments together using dynamic programming
(Figure: an s–t path decomposed into monotone and wiggly segments)

  19. From Min-Excess to Orienteering
• Excess of path P between u and v: ε_P(u,v) = d_P(u,v) − d(u,v)
• There exists a path from s to t that
  • collects reward at least Π
  • has length ≤ D
• Given a 3-approximation to min-excess:
  1. Divide OPT into 3 "equal-reward" parts (hypothetically), with excesses ε₁, ε₂, ε₃
  2. Approximate the part with the smallest excess: its excess is ≤ (ε₁+ε₂+ε₃)/3, and we can afford an excess of up to ε₁+ε₂+ε₃
• Using an r-approximation for min-excess (r ∈ ℤ₊), we get an r-approximation for s–t Orienteering (see the sketch after this slide)
(Figure: OPT split at v₁, v₂ into three parts; the approximation follows one part)
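The reduction can be pictured roughly as follows, assuming a hypothetical r-approximate subroutine `min_excess(u, v, k)` that returns a u–v path of reward at least k (exposing a `.length` attribute) or None. This only illustrates the guess-and-check structure; it is not the paper's exact algorithm or its analysis.

```python
# Illustration only (hypothetical oracle, not the paper's exact algorithm):
# guess the endpoints (u, v) of one "equal-reward part" of OPT and its reward k,
# call the approximate min-excess routine on that part, and keep the best guess
# whose stitched path  s -> u ~> v -> t  still fits in the budget D.
from itertools import product

def orienteering_via_min_excess(s, t, V, d, D, reward_guesses, min_excess):
    """d(a, b): shortest-path distance; reward_guesses: candidate segment rewards."""
    best_path, best_reward = None, 0
    for u, v in product(V, repeat=2):
        for k in reward_guesses:
            segment = min_excess(u, v, k)       # r-approximate min-excess path
            if segment is None:
                continue
            total_length = d(s, u) + segment.length + d(v, t)
            if total_length <= D and k > best_reward:
                best_path, best_reward = segment, k
    return best_path, best_reward
```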

  20. Solving Discounted-Reward TSP
• WLOG, γ = ½; the reward of v reached at time t is π_v·γ^t
• An interesting observation: OPT collects half of its reward before the first node v that has excess 1
  • Otherwise, shortcutting straight to v along the shortest path shortens the entire remaining path by at least 1, which at least doubles its discounted reward; the new path would then beat OPT, a contradiction
• Therefore, approximate the min-excess path from s to v
• The new path has excess ≤ 3, so the reward decreases by a factor of at most 2³
• This yields a 16-approximation (tuning parameters gives 6.75+ε; see the accounting sketched below)
(Figure: OPT and the first node v with excess 1; the prefix collects at least OPT/2)
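The accounting behind the factor of 16 on this slide can be written out as follows (a sketch of the slide's argument with γ = ½, before parameter tuning brings the factor down to 6.75+ε):

```latex
% The prefix of OPT up to the excess-1 node carries at least OPT/2 of the reward;
% a 3-approximate min-excess path to that node has excess at most 3, costing a
% discount factor of at most 2^3 on that reward.
\[
  \mathrm{ALG} \;\ge\; \frac{1}{2^{3}}\cdot\frac{\mathrm{OPT}}{2} \;=\; \frac{\mathrm{OPT}}{16}.
\]
```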

  21. So far…
• (2+ε)-approximation for Min-Excess
• 3-approximation for Orienteering
• (6.75+ε)-approximation for Discounted-Reward TSP
• You should've learnt how to look for your lost wallet

  22. Deadline-TSP
• Every vertex has a deadline D(v); find a path that maximizes the reward of nodes v visited before D(v)
• If a path has length smaller than the minimum deadline, use Orienteering to approximate the reward on that path
  • Everything is visited before the minimum deadline
  • No need to worry about the deadlines of other nodes
• Does OPT always have a large subpath with the above property? NO!
• However, there are many subpaths of OPT with the above property that together contain all the reward

  23. A segmentation of OPT
(Figure: the optimal path plotted against time, segmented by deadline)

  24. Deadline-TSP
• Segment the graph into many parts, approximate each using Orienteering, and patch the pieces together
• How do we find such a segmentation without knowing the optimal path?
• To avoid double-counting reward, the segments should be node-disjoint
• Our result: there exists a segmentation based only on deadlines such that the resulting solution is a (3 log n)-approximation (a simplified sketch of the idea follows below)
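One simplified way to picture the "segment by deadlines" idea is sketched below (my illustration, not the paper's actual segmentation or patching argument): group vertices into geometric deadline classes, run Orienteering on each class with that class's smallest deadline as the budget, and keep the best single answer, paying roughly an O(log n) factor for treating the classes separately.

```python
# Simplified illustration only; the paper's segmentation and patching are more
# delicate.  `orienteering(group, budget)` is a hypothetical routine returning
# (path, reward) for the subgraph induced by `group` with length budget `budget`.
from collections import defaultdict
from math import floor, log2

def deadline_tsp_sketch(vertices, deadline, orienteering):
    classes = defaultdict(list)
    for v in vertices:                                   # geometric deadline classes
        classes[floor(log2(max(deadline[v], 1)))].append(v)
    best_path, best_reward = None, 0
    for group in classes.values():
        budget = min(deadline[v] for v in group)         # safe for every node in the class
        path, reward = orienteering(group, budget)
        if reward > best_reward:
            best_path, best_reward = path, reward
    return best_path, best_reward
```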

  25. From Deadlines to Time-Windows
• Nodes have deadlines as well as release times
• Note that release times are dual to deadlines – if we traverse the path from the end to the start, release times become deadlines!
  • If ℓ(OPT) = L, set D(v) = L − R(v): requiring ℓ(s,v) ≥ R(v) is the same as requiring ℓ(t,v) ≤ L − R(v) on the reversed path (see the identity below)
• A log-approximation for deadlines ⇒ a log-approximation for release dates
• Algorithm for Time-Windows:
  • Run the approximation for Deadline-TSP
  • Replace Orienteering with Orienteering with release dates
• An O(log² n)-approximation for the Time-Window Problem
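The duality used here can be stated as a one-line identity (same notation as the slide, with L = ℓ(OPT) and t_v the time at which the forward path reaches v):

```latex
% Reversing a path of total length L turns an arrival time t_v into L - t_v, so a
% release-time constraint becomes a deadline constraint with D'(v) = L - R(v):
\[
  t_v \;\ge\; R(v) \quad\Longleftrightarrow\quad L - t_v \;\le\; L - R(v) \;=\; D'(v).
\]
```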

  26. Our results
Problem                          Approximation   Objective
Min-Excess Path                  2+ε             min ε(P) s.t. π(P) = k
Orienteering (point-to-point)    3               max π(P) s.t. ℓ(P) ≤ D
Discounted-Reward TSP            6.75+ε          max Σ_v π_v γ^(t_v)
Deadline-TSP                     3 log n         max Σ_{t_v ≤ D(v)} π_v
Time-Window Problem              3 log² n        max Σ_{R(v) ≤ t_v ≤ D(v)} π_v

  27. Some extensions
• Unrooted versions
• Multiple tours
• Max-reward Steiner tree of bounded size

  28. Future work…
• Improve the approximations
  • 2-approximation for Orienteering?
  • Constant factor for Deadline-TSP?
• The Time-Window Problem: can we get a constant or O(log n) factor in general graphs?
