
Algorithms Lecture 10


Presentation Transcript


  1. Algorithms Lecture 10 Lecturer: Moni Naor

  2. Linear Programming in Small Dimension
  Canonical form of linear programming:
  Maximize: c1·x1 + c2·x2 + … + cd·xd
  Subject to:
  a1,1·x1 + a1,2·x2 + … + a1,d·xd ≤ b1
  a2,1·x1 + a2,2·x2 + … + a2,d·xd ≤ b2
  ...
  an,1·x1 + an,2·x2 + … + an,d·xd ≤ bn
  n – the number of constraints, d – the number of variables, or dimension
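A small concrete instance of the canonical form, checked mechanically (a sketch; the numbers and the use of scipy.optimize.linprog are illustrative assumptions, not from the lecture — linprog minimizes, so the objective is negated to maximize):

```python
import numpy as np
from scipy.optimize import linprog

# maximize c.x subject to A x <= b (canonical form, d = 2, n = 3)
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# linprog minimizes, so maximize c.x by minimizing -c.x; variables are free,
# matching the canonical form above (no implicit nonnegativity).
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
print(res.x, -res.fun)  # optimal point and optimal value
```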

  3. Linear Programming in Two Dimensions
  [Figure: the feasible region of a 2-D instance, with the optimal vertex marked]

  4. What is special in low dimension
  • Only d constraints determine the solution
  • The optimal value on those d constraints determines the global one
  • The problem is reduced to finding those constraints that matter
  • We know that equality holds in those constraints
  • Generic algorithms:
  • Fourier–Motzkin: (n/2)^(2^d)
  • Worst case of Simplex: number of basic feasible solutions/vertices, C(n,d)
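These generic bounds blow up quickly even at modest dimension; a quick computation of the two quantities quoted above (a sketch, with n = 100 and d = 5 chosen arbitrarily):

```python
from math import comb

n, d = 100, 5
print((n // 2) ** (2 ** d))  # Fourier-Motzkin bound (n/2)^(2^d): a 55-digit number
print(comb(n, d))            # C(n, d) candidate bases: 75,287,520
```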

  5. Key Observation
  • If we know that an inequality constraint is defining:
  • we can reduce the number of variables
  • Projection
  • Substitution
  [Figure: feasible region and optimal vertex, with the defining constraint 4x1 - 6x2 = 4]
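For instance, taking the defining constraint from the figure with equality and substituting eliminates one variable. A minimal sketch with sympy; the objective 3x1 + 5x2 and the constraint 2x1 + x2 ≤ 10 are made-up numbers for illustration:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# the defining constraint from the figure, taken with equality
x2_expr = sp.solve(sp.Eq(4*x1 - 6*x2, 4), x2)[0]  # x2 = (2*x1 - 2)/3

objective = 3*x1 + 5*x2
constraint = 2*x1 + x2 - 10                       # stands for 2*x1 + x2 <= 10

print(sp.simplify(objective.subs(x2, x2_expr)))   # objective in x1 alone
print(sp.simplify(constraint.subs(x2, x2_expr)))  # constraint in x1 alone
```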

  6. Incremental Algorithm B
  Input: A set of n constraints H on d variables
  Output: the set of defining constraints
  0. If |H| = d, output B(H) = H
  1. Pick a random constraint h ∈ H and recursively find B(H \ {h})
  2. If B(H \ {h}) does not violate h, output B(H \ {h}); else project all the constraints onto h and recursively solve this (n-1, d-1) LP program
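A 2-D instantiation of this recursion, as a rough sketch rather than the lecture's algorithm verbatim: it assumes the LP is feasible and bounded, solve_1d is the d = 1 base case, and project performs the substitution of step 2.

```python
import random

def solve_1d(c, cons):
    """d = 1 case: maximize c*x subject to a*x <= b for each (a, b) in cons."""
    lo, hi = float('-inf'), float('inf')
    for a, b in cons:
        if a > 0:
            hi = min(hi, b / a)
        elif a < 0:
            lo = max(lo, b / a)
        elif b < 0:
            raise ValueError('infeasible')  # 0*x <= b with b < 0
    if lo > hi:
        raise ValueError('infeasible')
    return hi if c >= 0 else lo             # assumes the 1-D LP is bounded

def project(c, H, h):
    """Substitute the boundary line of h into the objective and constraints."""
    (a1, a2), b = h
    if abs(a2) >= abs(a1):                  # eliminate x2 = (b - a1*x1) / a2
        c1 = c[0] - c[1] * a1 / a2
        cons = [(g1 - g2 * a1 / a2, bb - g2 * b / a2) for (g1, g2), bb in H]
        x1 = solve_1d(c1, cons)
        return (x1, (b - a1 * x1) / a2)
    c2 = c[1] - c[0] * a2 / a1              # eliminate x1 = (b - a2*x2) / a1
    cons = [(g2 - g1 * a2 / a1, bb - g1 * b / a1) for (g1, g2), bb in H]
    x2 = solve_1d(c2, cons)
    return ((b - a2 * x2) / a1, x2)

def seidel(c, H):
    """Maximize c.x subject to a.x <= b for ((a1, a2), b) in H (2-D sketch)."""
    if len(H) == 2:
        # |H| = d: as in step 0, assume H itself is the basis and intersect
        # the two boundary lines (nonzero determinant assumed).
        ((a1, a2), b1), ((a3, a4), b2) = H
        det = a1 * a4 - a2 * a3
        return ((b1 * a4 - b2 * a2) / det, (a1 * b2 - a3 * b1) / det)
    i = random.randrange(len(H))            # step 1: pick a random constraint h
    h, rest = H[i], H[:i] + H[i + 1:]
    x = seidel(c, rest)                     # optimum of B(H \ {h})
    (a1, a2), b = h
    if a1 * x[0] + a2 * x[1] <= b + 1e-9:
        return x                            # step 2: h is not violated
    return project(c, rest, h)              # else solve the (n-1, d-1) LP on h

# maximize x1 + x2 subject to x1 <= 1, x2 <= 1, x1 + x2 <= 1.5
H = [((1.0, 0.0), 1.0), ((0.0, 1.0), 1.0), ((1.0, 1.0), 1.5)]
print(seidel((1.0, 1.0), H))                # a vertex with x1 + x2 = 1.5
```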

  7. Correctness, Termination and Analysis
  Correctness: by induction…
  Termination: if a non-defining constraint is chosen, there is no need to rerun
  Analysis: the probability that h is one of the defining constraints is d/n
  0. If |H| = d, output B(H) = H
  1. Pick a random constraint h ∈ H and recursively find B(H \ {h})
  2. If B(H \ {h}) does not violate h, output B(H \ {h}); else project all the constraints onto h and recursively solve this (n-1, d-1) LP program

  8. Analysis
  The probability that h is one of the defining constraints is d/n, giving the recurrence
  T(d,n) = (d/n)·T(d-1,n-1) + T(d,n-1)
  By induction: ≤ (d/n)·(d-1)!·(n-1) + d!·(n-1) = (n-1)·d!·(1/n + 1) ≤ n·d!
  0. If |H| = d, output B(H) = H
  1. Pick a random constraint h ∈ H and recursively find B(H \ {h})
  2. If B(H \ {h}) does not violate h, output B(H \ {h}); else project all the constraints onto h and recursively solve this (n-1, d-1) LP program
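A quick numeric sanity check of the claimed bound T(d,n) ≤ n·d! (not a proof; normalizing the base-case cost to 1 is an assumption):

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def T(d, n):
    if d == 0 or n <= d:
        return 1.0  # base-case cost normalized to 1
    return (d / n) * T(d - 1, n - 1) + T(d, n - 1)

# verify the bound on a grid of small (d, n) pairs
assert all(T(d, n) <= n * factorial(d)
           for d in range(1, 7) for n in range(d + 1, 60))
```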

  9. How to improve
  The algorithm is wasteful: when the solution does not fit the new constraint, a new solution is computed from scratch
  0. If |H| = d, output B(H) = H
  1. Pick a random constraint h ∈ H and recursively find B(H \ {h})
  2. If B(H \ {h}) does not violate h, output B(H \ {h}); else project all the constraints onto h and recursively solve this (n-1, d-1) LP program

  10. Random Sampling Idea
  Build the basis by adding the constraints in a manner related to history
  Input: A set of n constraints H on d variables
  Output: the set of defining constraints
  0. If |H| ≤ c·d², return simplex on H
  S ← ∅
  Repeat:
  • Pick a random R ⊂ H of size r
  • Solve recursively on S ∪ R; call the solution u
  • V = the set of constraints in H violated by u
  • If |V| ≤ t, then S ← S ∪ V
  Until V = ∅
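A skeleton of this loop; base_solver (standing in for "simplex on H") and violated (the violation test) are stubs the caller must supply, and the choices of r and t follow the analysis on the slides below:

```python
import random

def sampling_lp(H, d, base_solver, violated, c_factor=9):
    """Random-sampling LP sketch. H: list of constraints; d: dimension.
    base_solver(H) returns the optimum of a small LP; violated(h, u) tests
    whether constraint h is violated at point u."""
    n = len(H)
    if n <= c_factor * d * d:        # step 0: small enough, solve directly
        return base_solver(H)
    r = int(d * n ** 0.5)            # sample size r = d * sqrt(n)
    t = 2 * d * n // r               # augmentation threshold t = 2nd/r
    S = []
    while True:
        R = random.sample(H, r)      # pick a random R subset of H of size r
        u = sampling_lp(S + R, d, base_solver, violated, c_factor)
        V = [h for h in H if violated(h, u)]
        if not V:
            return u                 # nothing violated: u is globally optimal
        if len(V) <= t:              # successful augmentation: V contains a
            S.extend(V)              # basis constraint that was not yet in S
```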

  11. Correctness, Termination and Analysis
  Claim: each time we augment S (S ← S ∪ V), we add to S a new constraint from the "real" basis B of H
  • If u did not violate any constraint in B, it would be optimal
  • So V must contain an element of B which was not in S before
  • Since |B| = d, we can augment S at most d times
  • Therefore the number of constraints in the recursive call is |R| + |S| ≤ r + d·t
  • Important factor for the analysis: what is the probability of a successful augmentation?

  12. Sampling Lemma
  For any H and S ⊂ H, the expected (over R) number of constraints of H that violate u (the optimum on S ∪ R) is at most nd/(r+1) ≤ nd/r
  Proof: Let X(R,h) = 1 iff h is violated by the optimum of S ∪ R. We need to bound
  E_R[Σ_h X(R,h)] = (1/#R) Σ_{|R|=r} Σ_h X(R,h)
  Instead, consider all subsets Q = R ∪ {h} of size r+1:
  = (1/#R) Σ_{|Q|=r+1} Σ_{h∈Q} X(Q\{h}, h)
  = (#Q/#R) · (r+1) · Prob_{Q, h∈Q}[X(Q\{h}, h) = 1]
  For a fixed Q, at most d of the r+1 choices of h can be violated (such an h must lie in the basis of S ∪ Q), so the probability is at most d/(r+1); with #Q/#R = (n-r)/(r+1) this gives a bound of ≤ n·d/(r+1)

  13. Analysis
  • Setting t = 2nd/r implies (from Markov's inequality):
  • the expected number of recursive calls until a successful (|V| ≤ t) augmentation is constant
  • The number of constraints in a recursive call is bounded by r + d·t = r + O(d²n/r)
  • Setting r = d·n^(1/2) means that this is O(r)
  • Total expected running time: T(n) ≤ 2d·T(d·n^(1/2)) + O(d²n)
  • Result: O((log n)^(log d) · (simplex time)) + O(d²n)
  • Can be improved to O(d^d + d²n)
  • Can be improved to O(d^(√d) + d²n) using [Kalai, Matoušek–Sharir–Welzl]

  14. References
  • Motwani and Raghavan, Randomized Algorithms, Chapter 9.10
  • Michael Goldwasser, A Survey of Linear Programming in Randomized Subexponential Time, http://euler.slu.edu/~goldwasser/publications/SIGACT1995_Abstract.html
  • Piotr Indyk's course at MIT, Geometric Computation, http://theory.lcs.mit.edu/~indyk/6.838/
  • Applet: http://web.mit.edu/ardonite/6.838/linear-programming.htm
