
CS B553: Algorithms for Optimization and Learning






Presentation Transcript


  1. CS B553: Algorithms for Optimization and Learning Linear programming, quadratic programming, sequential quadratic programming

  2. Key ideas • Linear programming • Simplex method • Mixed-integer linear programming • Quadratic programming • Applications

  3. Radiosurgery CyberKnife (Accuray)

  4. [Figure: voxel grid showing normal tissue, tumors, and radiologically sensitive tissue]

  5. [Figure: tumor regions in the voxel grid]

  6. [Figure: tumor regions in the voxel grid]

  7. Optimization Formulation • Dose cells (xi, yj, zk) in a voxel grid • Cell class: normal, tumor, or sensitive • Beam “images” B1, …, Bn describing the dose absorbed at each cell at maximum power • Optimization variables: beam powers x1, …, xn • Constraints: • Normal cells: Dijk ≤ Dnormal • Sensitive cells: Dijk ≤ Dsensitive • Tumor cells: Dmin ≤ Dijk ≤ Dmax • 0 ≤ xb ≤ 1 • Dose calculation: Dijk = Σb xb Bb(i, j, k) • Objective: minimize total dose
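
This formulation can be assembled as a small LP and handed to an off-the-shelf solver. The sketch below uses scipy.optimize.linprog; the beam images, cell classes, and dose thresholds are made-up illustrative data, not values from the lecture.

```python
# Hedged sketch of the dose LP above, with made-up 2-beam, 4-cell data.
# B[b] gives the dose beam b deposits in each cell at full power (assumption).
import numpy as np
from scipy.optimize import linprog

B = np.array([[1.0, 0.2, 0.8, 0.1],    # beam 1 dose per cell
              [0.3, 0.9, 0.7, 0.2]])   # beam 2 dose per cell
cell_class = np.array(["normal", "tumor", "tumor", "sensitive"])
D_normal, D_sensitive, D_min, D_max = 0.8, 0.4, 1.0, 1.6

# Dose in cell c is D_c = sum_b x_b * B[b, c]; total dose = (B 1)^T x
c = B.sum(axis=1)                       # cost vector over beam powers

A_ub, b_ub = [], []
for j, cls in enumerate(cell_class):
    col = B[:, j]
    if cls == "normal":
        A_ub.append(col);  b_ub.append(D_normal)        # D_c <= D_normal
    elif cls == "sensitive":
        A_ub.append(col);  b_ub.append(D_sensitive)     # D_c <= D_sensitive
    else:  # tumor: D_min <= D_c <= D_max
        A_ub.append(col);  b_ub.append(D_max)
        A_ub.append(-col); b_ub.append(-D_min)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 1)] * B.shape[0])
print(res.status, res.x)    # status 0 = optimal
```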

  8. Linear Program • General form: min fᵀx + g s.t. Ax ≤ b, Cx = d • [Figure: a convex polytope; a slice through the polytope]
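
A tiny instance of this general form, with both an inequality and an equality constraint, can be solved directly with scipy.optimize.linprog (the numbers are illustrative):

```python
# min f^T x  s.t.  A x <= b,  C x = d,  x >= 0, in scipy's convention.
from scipy.optimize import linprog

f = [1.0, 2.0]
A, b = [[1.0, -1.0]], [0.5]    # x1 - x2 <= 0.5
C, d = [[1.0, 1.0]], [1.0]     # x1 + x2 = 1

res = linprog(f, A_ub=A, b_ub=b, A_eq=C, b_eq=d, bounds=[(0, None)] * 2)
print(res.x)   # -> [0.75, 0.25]
```

At the optimum both constraints are active: the solver pushes weight onto the cheap variable x1 until the inequality x1 − x2 ≤ 0.5 blocks it.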

  9. Three cases • Infeasible • Feasible, bounded • Feasible, unbounded • [Figure: objective direction f in each case; optimum x* shown for the bounded cases]

  10. Simplex Algorithm (Dantzig) • Start from a vertex of the feasible polytope • “Walk” along polytope edges while decreasing the objective on each step • Stop when the edge is unbounded or no improvement can be made • Implementation details: • How to pick an edge (exiting and entering) • Solving for vertices in large systems • Degeneracy: no progress made due to the objective vector being perpendicular to edges
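
The walk above can be sketched as a toy tableau simplex for the special case min cᵀx s.t. Ax ≤ b, x ≥ 0 with b ≥ 0, so that the all-slack basis is a starting vertex. This is illustrative only (Dantzig's entering rule, no anti-cycling safeguards):

```python
import numpy as np

def simplex(c, A, b, tol=1e-9):
    """Toy simplex: min c^T x s.t. A x <= b, x >= 0, assuming b >= 0."""
    m, n = A.shape
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)     # slack variables give the starting vertex
    T[:m, -1] = b
    T[-1, :n] = c                  # bottom row holds the reduced costs
    basis = list(range(n, n + m))
    while True:
        j = int(np.argmin(T[-1, :-1]))
        if T[-1, j] > -tol:
            break                  # no improving edge: current vertex optimal
        col = T[:m, j]
        if np.all(col <= tol):
            raise ValueError("LP is unbounded along this edge")
        ratios = np.where(col > tol,
                          T[:m, -1] / np.where(col > tol, col, 1.0), np.inf)
        i = int(np.argmin(ratios)) # ratio test picks the leaving constraint
        T[i] /= T[i, j]            # pivot: move to the adjacent vertex
        for r in range(m + 1):
            if r != i:
                T[r] -= T[r, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], -T[-1, -1]

x, val = simplex(np.array([-1.0, -1.0]),
                 np.array([[1.0, 2.0], [3.0, 1.0]]),
                 np.array([4.0, 6.0]))
print(x, val)   # -> [1.6 1.2] -2.8
```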

  11. Computational Complexity • Worst case exponential • Average case polynomial (smoothed analysis) • In practice, usually tractable • Commercial software (e.g., CPLEX) can handle millions of variables/constraints!

  12. Soft Constraints • [Figure: penalty as a function of dose for normal, sensitive, and tumor cells]

  13. Soft Constraints • Auxiliary variable zijk: penalty at each cell • zijk ≥ c(Dijk − Dnormal) • zijk ≥ 0 • [Figure: penalty zijk as a function of dose Dijk]

  14. Soft Constraints • Auxiliary variable zijk: penalty at each cell • zijk ≥ c(Dijk − Dnormal) • zijk ≥ 0 • Introduce a term in the objective to minimize Σijk zijk • [Figure: penalty zijk as a function of dose Dijk]

  15. Minimizing an Absolute Value • Objective: min_x |x1| s.t. Ax ≤ b, Cx = d • Reformulate with auxiliary variable v: min_{v,x} v s.t. Ax ≤ b, Cx = d, x1 ≤ v, −x1 ≤ v
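
This trick can be run directly through linprog. The toy instance below (made up for illustration) minimizes |x1| subject to x1 ≤ −2, so the optimum is forced away from zero:

```python
# min |x1| is not linear, but the reformulation above makes it an LP:
# minimize v with x1 <= v and -x1 <= v.
from scipy.optimize import linprog

c = [0.0, 1.0]                   # variables are (x1, v); minimize v
A_ub = [[1.0,  0.0],             # x1 <= -2
        [1.0, -1.0],             # x1 - v <= 0   (x1 <= v)
        [-1.0, -1.0]]            # -x1 - v <= 0  (-x1 <= v)
b_ub = [-2.0, 0.0, 0.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None), (0, None)])
print(res.x)    # -> x1 = -2, v = |x1| = 2
```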

  16. Minimizing an L1 or L∞ norm • L1 norm: min_x ||Fx − g||1 s.t. Ax ≤ b, Cx = d • L∞ norm: min_x ||Fx − g||∞ s.t. Ax ≤ b, Cx = d • [Figure: feasible polytope projected through F; optimum Fx* nearest g]

  17. Minimizing an L1 or L∞ norm • L1 norm: min_x ||Fx − g||1 s.t. Ax ≤ b, Cx = d • Reformulate with auxiliary vector e: min_{e,x} 1ᵀe s.t. Fx + Ie ≥ g, Fx − Ie ≤ g, Ax ≤ b, Cx = d • [Figure: feasible polytope projected through F]
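
The L1 reformulation can be sketched with linprog on stacked variables (x, e); the constraints |Fx − g| ≤ e are written as the two inequality blocks from the slide. F, g, and the equality constraint below are made-up toy data:

```python
# min ||F x - g||_1 s.t. C x = d  becomes  min 1^T e with F x - e <= g
# and -F x - e <= -g (elementwise |F x - g| <= e).
import numpy as np
from scipy.optimize import linprog

F = np.array([[1.0, 0.0], [0.0, 2.0]])
g = np.array([1.0, 4.0])
C, d = np.array([[1.0, -1.0]]), np.array([0.0])   # x1 = x2

n, k = 2, 2                        # dims of x and of the residual F x - g
c = np.concatenate([np.zeros(n), np.ones(k)])     # minimize 1^T e
I = np.eye(k)
A_ub = np.block([[F, -I], [-F, -I]])
b_ub = np.concatenate([g, -g])
A_eq = np.hstack([C, np.zeros((1, k))])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=d,
              bounds=[(None, None)] * n + [(0, None)] * k)
print(res.x[:n], res.fun)   # -> x = [2, 2], ||Fx - g||_1 = 1
```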

  18. Minimizing an L2 norm • L2 norm: min_x ||Fx − g||2 s.t. Ax ≤ b, Cx = d • Not a linear program! • [Figure: feasible polytope projected through F; optimum Fx* nearest g]

  19. Quadratic Programming • General form: min ½xᵀHx + gᵀx + h s.t. Ax ≤ b, Cx = d • Objective: quadratic form • Constraints: linear
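
For the equality-only special case (no Ax ≤ b), the optimum has a closed form: stationarity Hx + g + Cᵀλ = 0 together with Cx = d is one linear system. A minimal sketch with made-up data:

```python
# Equality-constrained QP: min 1/2 x^T H x + g^T x  s.t.  C x = d.
# KKT system: [[H, C^T], [C, 0]] [x; lam] = [-g; d].
import numpy as np

H = np.array([[2.0, 0.0], [0.0, 2.0]])    # positive definite
g = np.array([-2.0, -4.0])
C = np.array([[1.0, 1.0]])                # constraint x1 + x2 = 1
d = np.array([1.0])

n, m = H.shape[0], C.shape[0]
K = np.block([[H, C.T], [C, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([-g, d]))
x, lam = sol[:n], sol[n:]
print(x, lam)   # -> x = [0, 1], lam = [2]
```

With inequality constraints there is no single linear system; that is what the active set method below addresses.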

  20. Quadratic programs • H positive definite • [Figure: feasible polytope; unconstrained minimum at −H⁻¹g]

  21. Quadratic programs • H positive definite • Optimum can lie off of a vertex! • [Figure: feasible polytope; unconstrained minimum at −H⁻¹g]

  22. Quadratic programs • H negative definite • [Figure: feasible polytope]

  23. Quadratic programs • H positive semidefinite • [Figure: feasible polytope]

  24. Simplex Algorithm for QPs • Start from a vertex of the feasible polytope • “Walk” along polytope facets while decreasing the objective on each step • Stop when the facet is unbounded or no improvement can be made • Facet: defined by m ≤ n constraints • m = n: vertex • m = n − 1: line • m = 1: hyperplane • m = 0: entire space

  25. Active Set Method • Active inequalities S = (i1, …, im) • Constraints ai1ᵀx = bi1, …, aimᵀx = bim • Written as ASx − bS = 0 • Objective ½xᵀHx + gᵀx + f • Lagrange multipliers λ = (λ1, …, λm) • Hx + g + ASᵀλ = 0 • ASx − bS = 0 • Solve this linear system for (x, λ) • If x violates a different constraint not in S, add it • If λk < 0, drop ik from S
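
The add/drop loop above can be sketched on a tiny convex QP. This simplified version jumps straight to each subproblem's minimizer (no step-length control, so it is only safe on easy instances like this made-up one), but it shows the two rules: add a violated constraint, drop one with a negative multiplier.

```python
# Toy active-set iteration: min 1/2 x^T H x + g^T x  s.t.  -x1 <= 0, -x2 <= 0,
# starting from the vertex x = 0 where both constraints are active.
import numpy as np

H = np.eye(2)
g = np.array([-1.0, 1.0])
A = np.array([[-1.0, 0.0], [0.0, -1.0]])   # rows a_i^T, constraints a_i^T x <= b_i
b = np.zeros(2)

S = [0, 1]                                 # active set at the starting vertex
while True:
    AS, m = A[S], len(S)
    K = np.block([[H, AS.T], [AS, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, b[S]]))
    x, lam = sol[:2], sol[2:]
    viol = [i for i in range(len(b)) if i not in S and A[i] @ x > b[i] + 1e-9]
    if viol:                               # x violates a constraint not in S: add it
        S.append(viol[0])
    elif lam.size and lam.min() < -1e-9:   # negative multiplier: drop that constraint
        S.pop(int(np.argmin(lam)))
    else:
        break                              # KKT conditions hold: optimal
print(x, S)   # -> x = [1, 0]; only the x2 >= 0 constraint stays active
```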

  26. Properties of active set methods for QPs • Inherits properties of the simplex algorithm • Worst case: exponential number of facets • Positive definite H: polynomial time in the typical case • Indefinite or negative definite H: can take exponential time! • Can encode NP-complete problems

  27. Applying QPs to Nonlinear Programs • Recall: we could convert an equality-constrained optimization to an unconstrained one, and use Newton’s method • Each Newton step: • Fits a quadratic form to the objective • Fits hyperplanes to each equality • Solves for a search direction (Δx, Δλ) using the linear equality-constrained optimization • How about inequalities?

  28. Sequential Quadratic Programming • Idea: fit half-space constraints to each inequality • g(x) ≤ 0 becomes g(xt) + ∇g(xt)ᵀ(x − xt) ≤ 0 • [Figure: nonlinear constraint g(x) ≤ 0 and its linearization at xt]

  29. Sequential Quadratic Programming • Given nonlinear minimization • min_x f(x) s.t. gi(x) ≤ 0 for i = 1, …, m; hj(x) = 0 for j = 1, …, p • At each step xt, solve the QP • min_Δx ½Δxᵀ∇x²L(xt, λt, μt)Δx + ∇xL(xt, λt, μt)ᵀΔx s.t. gi(xt) + ∇gi(xt)ᵀΔx ≤ 0 for i = 1, …, m; hj(xt) + ∇hj(xt)ᵀΔx = 0 for j = 1, …, p • To derive the search direction Δx • Directions Δλ and Δμ are taken from the QP multipliers
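
SciPy ships an SQP method of this family (SLSQP), so the scheme can be tried without implementing the QP subproblems by hand. The problem below is a made-up example: project the point (2, 1) onto the unit disk.

```python
# SLSQP run on min (x1-2)^2 + (x2-1)^2  s.t.  x1^2 + x2^2 <= 1.
# scipy's "ineq" convention is fun(x) >= 0, so the disk constraint is
# written as 1 - ||x||^2 >= 0.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2
cons = [{"type": "ineq", "fun": lambda x: 1.0 - x[0]**2 - x[1]**2}]

res = minimize(f, x0=np.zeros(2), method="SLSQP", constraints=cons)
print(res.x)   # -> roughly (2, 1)/sqrt(5)
```

The optimum lies on the constraint boundary, so each SQP step works with the linearized half-space version of the disk constraint, exactly as sketched on the previous slide.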

  30. Illustration • [Figure: step from xt; constraint g(x) ≤ 0 and its linearization g(xt) + ∇g(xt)ᵀ(x − xt) ≤ 0]

  31. Illustration • [Figure: step from xt+1; constraint g(x) ≤ 0 and its linearization g(xt+1) + ∇g(xt+1)ᵀ(x − xt+1) ≤ 0]

  32. Illustration • [Figure: step from xt+2; constraint g(x) ≤ 0 and its linearization g(xt+2) + ∇g(xt+2)ᵀ(x − xt+2) ≤ 0]

  33. SQP Properties • Equivalent to Newton’s method without constraints • Equivalent to Lagrange root finding with only equality constraints • Subtle implementation details: • Does the endpoint need to be strictly feasible, or just feasible up to a tolerance? • How to perform a line search in the presence of inequalities? • Implementations available in Matlab; FORTRAN packages too =(
