
Geometry of Online Packing Linear Programs






Presentation Transcript


  1. Geometry of Online Packing Linear Programs Marco Molinaro and R. Ravi, Carnegie Mellon University

  2. Packing Integer Programs (PIPs) • Non-negative c, A, b • Max cᵀx s.t. Ax ≤ b, x ∈ {0,1}ⁿ • A is an m × n matrix with entries in [0,1]
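
To make the objects on this slide concrete, here is a minimal Python sketch (the function name pip_value and the toy numbers are illustrative, not from the talk) that evaluates a 0/1 vector against a PIP instance.

```python
import numpy as np

def pip_value(c, A, b, x):
    """Value of a 0/1 solution x for the PIP  max c.x  s.t.  Ax <= b,
    or None if x is infeasible.  A is m-by-n with entries in [0, 1]."""
    x = np.asarray(x)
    assert set(np.unique(x)) <= {0, 1}, "x must be a 0/1 vector"
    if np.any(A @ x > b):
        return None                      # some budget is overfilled
    return float(c @ x)

# Tiny example: 2 constraints, 3 columns.
c = np.array([3.0, 2.0, 1.0])
A = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.4]])
b = np.array([1.2, 1.1])
print(pip_value(c, A, b, [1, 0, 1]))     # feasible, value 4.0
```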

  3. Online Packing Integer Programs • Adversary chooses values for c, A, b • …but columns are presented in random order • …when a column arrives, its variable must be set to 0/1 irrevocably • b and n are known upfront
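
A minimal simulator of this online model, assuming nothing beyond what the slide states; run_online_pip, decide, and the trivial greedy policy are hypothetical names used only to exercise the arrival loop, not the talk's algorithm.

```python
import numpy as np

def run_online_pip(c, A, b, decide, seed=0):
    """Simulate the model on this slide: columns arrive in uniformly random
    order and `decide` must fix each variable to 0 or 1 irrevocably, knowing
    only b, n, and the columns seen so far."""
    m, n = A.shape
    order = np.random.default_rng(seed).permutation(n)
    used = np.zeros(m)                          # budget consumed so far
    value = 0.0
    for t in order:
        x_t = decide(c[t], A[:, t], used, b, n)  # irrevocable 0/1 choice
        if x_t == 1 and np.all(used + A[:, t] <= b):
            used += A[:, t]
            value += c[t]
    return value

# A trivial policy just to exercise the loop: take a column whenever it fits.
greedy = lambda c_t, a_t, used, b, n: int(np.all(used + a_t <= b))
```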

  4. Online Packing Integer Programs • Goal: find a feasible solution that maximizes expected value • (1−ε)-competitive: the expected value of the solution is at least (1−ε) times the offline optimum
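
One standard way to write out the guarantee, consistent with the slide (the expectation is over the random column order):

```latex
% (1 - \epsilon)-competitiveness, written out; the expectation is over the
% random arrival order of the columns:
\mathbb{E}\Bigl[\sum_t c_t\, x_t\Bigr] \;\ge\; (1-\epsilon)\,\mathrm{OPT},
\qquad
\mathrm{OPT} \;=\; \max\bigl\{\, c^\top x \;:\; Ax \le b,\ x \in \{0,1\}^n \,\bigr\}.
```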

  5. Previous Results • First online problem: secretary problem [Dynkin 63] • B-secretary problem (m=1, b=B, A is all 1's) [Kleinberg 05]: (1−ε)-competitive for B ≥ Ω(1/ε²) (does not depend on n) • PIPs (B = minᵢ bᵢ) [FHKMS 10, AWY]: (1−ε)-competitive for B ≥ Ω((m/ε²) log(n/ε)) (depends on n)

  6. Main Question and Result • Q: Do general PIPs become more difficult for larger n? • A: No! • Main result: Algorithm that is (1−ε)-competitive when B ≥ Ω((m²/ε²) log(m/ε))

  7. High-level Idea • Online PIP as learning • Improving learning error using tailored covering bounds • Geometry of PIPs that allow good covering bounds • Reduce general PIPs to the above • For this talk: every right-hand side bᵢ equals B, and we show a weaker bound

  8. Online PIP as Learning • Reduction to learning a classifier [DH 09] • Linear classifier: given a (dual) vector p, each column receives a 0/1 label (figure: columns labeled 0/1 by the classifier)
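
A sketch of this classifier in code, using the threshold rule cₜ > pᵀaₜ that the later build of this slide (slide 10) spells out; classify and the toy instance are illustrative, not from the talk.

```python
import numpy as np

def classify(p, c, A):
    """Linear classifier from the slide: with dual vector p, column t gets
    label 1 exactly when its value exceeds its dual cost, c_t > p . a_t."""
    return (c > p @ A).astype(int)     # one 0/1 label per column

p = np.array([1.0, 2.0])
c = np.array([3.0, 0.5, 2.5])
A = np.array([[1.0, 0.2, 0.3],
              [0.5, 0.4, 1.0]])
print(classify(p, c, A))               # -> [1 0 1]
```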

  9. Online PIP as Learning • Reduction to learning a classifier [DH 09] • Linear classifier: given a (dual) vector p • Claim: If the classification x(p) given by p satisfies 1) A x(p) ≤ b [Feasible] and 2) (A x(p))ᵢ ≥ (1−ε) bᵢ for every constraint i with pᵢ > 0 [Packs tightly], then x(p) is (1−ε)-optimal. Moreover, such a classification always exists.

  10. Online PIP as Learning • Reduction to learning [DH 09] • Linear classifier: given a (dual) vector p, set xₜ(p) = 1 if cₜ > pᵀaₜ and 0 otherwise • Claim: If the classification x(p) given by p satisfies 1) A x(p) ≤ b [Feasible] and 2) (A x(p))ᵢ ≥ (1−ε) bᵢ for every constraint i with pᵢ > 0 [Packs tightly], then x(p) is (1−ε)-optimal. Moreover, such a classification always exists.
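
The claim above is stated without its calculation; the following is a sketch of the standard dual-fitting argument consistent with the two conditions (it is reconstructed, not copied from the talk):

```latex
% Why "feasible + packs tightly" gives (1-\epsilon)-optimality (sketch).
% Columns rejected by x(p) have c_t \le p^\top a_t, so for any feasible x^*:
c^\top x^* \;\le\; \sum_t \max(c_t - p^\top a_t,\, 0) + p^\top A x^*
          \;\le\; \bigl(c^\top x(p) - p^\top A x(p)\bigr) + p^\top b .
% Packing every constraint with p_i > 0 to at least (1-\epsilon) b_i gives
p^\top\bigl(b - A x(p)\bigr) \;\le\; \epsilon\, p^\top b ,
\qquad
c^\top x(p) \;\ge\; p^\top A x(p) \;\ge\; (1-\epsilon)\, p^\top b .
% Combining:  OPT \le c^\top x(p) + \epsilon\, p^\top b \le c^\top x(p)/(1-\epsilon).
```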

  11. Online PIP as Learning • Solving PIP via learning: • Sample S: the first ε fraction of columns • Compute an appropriate dual vector p for the sampled IP • Use p to classify the remaining columns
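
A hedged sketch of this sample-then-classify recipe; learn_dual and sample_then_classify are illustrative names, and the ε-scaling of the sampled budgets and the LP-dual computation follow the usual [DH 09]/[AWY]-style approach, so details may differ from the talk's algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def learn_dual(c_S, A_S, b, eps):
    """Learn a dual vector p by solving the dual of the sampled LP
       max c_S.x  s.t.  A_S x <= eps*b,  0 <= x <= 1.
    Dual variables: p (for the budgets) and q (for the bounds x <= 1)."""
    m, s = A_S.shape
    obj = np.concatenate([eps * b, np.ones(s)])   # min  eps*b.p + 1.q
    A_ub = np.hstack([-A_S.T, -np.eye(s)])        # s.t. A_S^T p + q >= c_S
    res = linprog(obj, A_ub=A_ub, b_ub=-c_S, bounds=(0, None))
    return res.x[:m]

def sample_then_classify(c, A, b, eps=0.1, seed=0):
    """Sketch of the slide's recipe: watch the first eps-fraction of the
    randomly ordered columns without accepting any, learn p from them, then
    accept a later column t iff c_t > p.a_t and it still fits the budget."""
    m, n = A.shape
    order = np.random.default_rng(seed).permutation(n)
    s = max(1, int(eps * n))
    S, rest = order[:s], order[s:]
    p = learn_dual(c[S], A[:, S], b, eps)
    used, value = np.zeros(m), 0.0
    for t in rest:
        if c[t] > p @ A[:, t] and np.all(used + A[:, t] <= b):
            used += A[:, t]
            value += c[t]
    return value
```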

  12. Online PIP as Learning • Solving PIP via learning: sample S (the first ε fraction of columns), compute an appropriate p for the sampled IP, use p to classify the remaining columns • Probability of learning a good classifier: • Consider a classifier that overfills some budget: it can only be learned if the sample is skewed, which happens with probability at most exp(−Ω(ε²B)) • At most n^m distinct bad classifiers • Union bounding over all bad classifiers, we learn a bad classifier with probability at most n^m · exp(−Ω(ε²B)) • When B ≥ Ω((m/ε²) log n), we get a good classifier with high probability

  13. Online PIP as Learning • Solving PIP via learning (as on the previous slide); improve this… • Probability of learning a good classification: • Consider a classification that overfills some budget: it can only be learned if the sample is skewed, which happens with probability at most exp(−Ω(ε²B)) • At most n^m distinct bad classifications • Union bounding over all bad classifications, we learn the desired good classification with probability at least 1 − n^m · exp(−Ω(ε²B)) • When B ≥ Ω((m/ε²) log n), we get a good classification with high probability
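
The calculation behind these two slides, written out as a sketch (constants, and the exact power of ε in the exponent, are not recoverable from the transcript and are glossed over here):

```latex
% Union bound sketched on slides 12-13:
\Pr[\text{learn a bad classifier}]
  \;\le\; \underbrace{n^{m}}_{\#\text{bad classifiers}}
          \cdot
          \underbrace{e^{-\Omega(\epsilon^{2} B)}}_{\Pr[\text{skewed sample}]}
  \;\le\; \delta
  \quad\text{once}\quad
  B \;\ge\; \Omega\!\Bigl(\tfrac{m}{\epsilon^{2}} \log \tfrac{n}{\delta}\Bigr).
```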

  14. Improved Learning Error • Idea 1: Covering bounds via witnesses (handling multiple bad classifiers at a time) • +-witness: z is a +-witness of p for constraint i if • Columns picked by z ⊆ columns picked by p • Total occupation of constraint i by the columns picked by z is large • −-witness: similar, based on total weight • Lemma: Suppose there is a witness set of size W. Then the probability of learning a bad classifier is at most W · exp(−Ω(ε²B))
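
A small sketch of the +-witness check as reconstructed above; the transcript does not give the exact numeric threshold, so the code uses "occupation of constraint i exceeds b_i" as a stand-in assumption, and is_plus_witness is an illustrative name.

```python
import numpy as np

def is_plus_witness(z, p, i, c, A, b):
    """Check the two reconstructed conditions of a +-witness of p for
    constraint i: (1) the columns picked by z are a subset of the columns
    picked by the classifier p, and (2) z alone already puts a large load
    on constraint i (here: more than b[i], a stand-in threshold)."""
    picked_by_p = c > p @ A                   # columns the classifier picks
    picked_by_z = np.asarray(z, dtype=bool)   # columns the witness picks
    subset = np.all(~picked_by_z | picked_by_p)
    occupation = A[i, picked_by_z].sum()      # load z puts on constraint i
    return subset and occupation > b[i]
```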

  15. Geometry of PIPs with Small Witness Set • For some PIPs, the size of any witness set must be very large • Idea 2: Consider PIPs whose columns lie on few (say k) 1-d subspaces

  16. Geometry of PIPs with Small Witness Set • For some PIPs, the size of any witness set must be very large • Idea 2: Consider PIPs whose columns lie on few (say k) 1-d subspaces (the figure shows k = 2) • Lemma: For such PIPs, we can find a witness set whose size depends only on k and m, not on n
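
A short sketch of the geometric condition on these two slides: counting how many 1-d subspaces the columns of A span; the k = 2 example mirrors the figure annotation, and count_directions is an illustrative name.

```python
import numpy as np

def count_directions(A, tol=1e-9):
    """Count the 1-d subspaces spanned by the (nonzero) columns of A, by
    normalizing each column and collecting the distinct directions."""
    directions = set()
    for col in A.T:
        norm = np.linalg.norm(col)
        if norm < tol:
            continue                          # zero columns span no direction
        directions.add(tuple(np.round(col / norm, 9)))
    return len(directions)

# Example with k = 2 directions, as in the slide's picture:
A = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.3]])
print(count_directions(A))                    # -> 2
```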

  17. Geometry of PIPs with Small Witness Set • Covering bound + witness size: it suffices that B grows with k, m and 1/ε only (independent of n) • Final step: Convert any PIP into one whose columns lie on few 1-d subspaces, losing only an ε fraction of the value • Algorithm is (1−ε)-competitive when B ≥ Ω((m²/ε²) log(m/ε))
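
One plausible way to carry out such a conversion, offered only as an illustration and not as the paper's actual reduction: drop tiny entries and snap the rest of each column to a geometric grid relative to its largest entry.

```python
import numpy as np

def round_columns(A, eps):
    """Illustrative discretization (NOT necessarily the paper's reduction):
    zero out entries below (eps/m) * (column max) and round the remaining
    entries of each column UP to a geometric grid relative to the column max.
    The rounded columns then lie on a number of 1-d subspaces that depends
    only on m and eps, not on n; rounding up only shrinks the feasible set."""
    m, n = A.shape
    R = np.zeros_like(A, dtype=float)
    for t in range(n):
        col, top = A[:, t], A[:, t].max()
        if top <= 0:
            continue
        keep = col >= (eps / m) * top                      # small entries dropped
        j = np.floor(np.log(top / col[keep]) / np.log1p(eps))
        R[keep, t] = top * (1.0 + eps) ** (-j)             # >= original entry
    return R
```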

  18. Conclusion • Guarantee for online PIPs independent of the number of columns • Asymptotically matches that for the single-constraint version [Kleinberg 05] • Ideas (making the learning problem more robust): • Tailored covering bound based on witnesses • Analyze the geometry of the columns to obtain a small witness set • Open problems: • Obtain the optimal dependence on B? Can do if columns are sampled with replacement [DJSW 11] • Generalize to AdWords-type problems • Better online models: infinite horizon? less randomness?

  19. Thank you!
