
Inexact Primal-Dual Methods for Equality Constrained Optimization






Presentation Transcript


  1. U.S.-Mexico Workshop 2007 Inexact Primal-Dual Methods for Equality Constrained Optimization Frank Edward Curtis Northwestern University with Richard Byrd and Jorge Nocedal January 9, 2007

  2. Outline • Description of an Algorithm • Step computation • Step acceptance • Global Convergence Analysis • Merit function and sufficient decrease • Satisfying first-order conditions • Model Problem (Haber) • Problem formulation • Numerical Results • Final Remarks • Future work • Negative Curvature

  3. Outline • Description of an Algorithm • Step computation • Step acceptance • Global Convergence Analysis • Merit function and sufficient decrease • Satisfying first-order conditions • Model Problem (Haber) • Problem formulation • Numerical Results • Final Remarks • Future work • Negative Curvature

  4. Line Search SQP Framework • Define merit function • Implement a line search
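The merit function and line-search rule on this slide were embedded as images and did not survive transcription. A standard form for a penalty-based line-search SQP framework (the specific symbols and constants below are assumptions, not taken from the slide) is:

```latex
% Exact penalty (merit) function with penalty parameter \pi > 0
\phi(x;\pi) \;=\; f(x) \;+\; \pi\,\|c(x)\|

% SQP step: (approximately) solve the quadratic program
\min_{d}\;\; \nabla f(x_k)^T d + \tfrac12\, d^T W_k d
\quad \text{s.t.} \quad c(x_k) + A_k d = 0

% Backtracking line search: find \alpha \in (0,1] with
\phi(x_k + \alpha d_k;\pi) \;\le\; \phi(x_k;\pi) \;-\; \eta\,\alpha\,\Delta q(d_k;\pi)
```

Here $W_k$ is (an approximation to) the Hessian of the Lagrangian, $A_k$ is the constraint Jacobian, and $\Delta q$ denotes the reduction in a model of the merit function.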

  5. Exact Case

  6. Exact Case Exact step minimizes the objective on the linearized constraints

  7. Exact Case Exact step minimizes the objective on the linearized constraints … which may lead to an increase in the model objective

  8. Exact Case Exact step minimizes the objective on the linearized constraints … which may lead to an increase in the model objective … but this is acceptable, since we can account for the conflict by increasing the penalty parameter

  9. Exact Case We go directly from solving the quadratic program to obtaining a reduction in the model of the merit function! That is, either for the most recent penalty parameter or for a higher one, we satisfy the condition:

  10. Exact Case Observe that the quadratic term can significantly influence the penalty parameter
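The condition referenced on slides 9–10 was an image. One common form of the model-reduction condition in penalty SQP methods, consistent with the surrounding discussion but with constants that are assumptions here, is:

```latex
% Reduction in the model of the merit function produced by step d
\Delta q(d;\pi) \;=\; -\nabla f^T d \;-\; \tfrac12\, d^T W d
\;+\; \pi\left(\|c\| - \|c + A d\|\right)

% Sufficient model reduction: increase \pi if necessary so that
\Delta q(d;\pi) \;\ge\; \sigma\,\pi\,\|c\|, \qquad \sigma \in (0,1)
```

The quadratic term $d^T W d$ enters $\Delta q$ directly, which is why it can significantly influence how large the penalty parameter must become.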

  11. Inexact Case

  12. Inexact Case

  13. Inexact Case Step is acceptable if the acceptance conditions hold for the current penalty parameter or an increased one

  14. Inexact Case Step is acceptable if the acceptance conditions hold for the current penalty parameter or an increased one

  15. Inexact Case Step is acceptable if the acceptance conditions hold for the current penalty parameter or an increased one
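The acceptance conditions on slides 13–15 were embedded as images. In the spirit of the authors' inexact SQP framework (the precise form and constants below are assumptions), an inexact step $(d,\delta)$ is judged by the residual it leaves in the primal-dual system, combined with sufficient reduction in the merit model:

```latex
% Residual of the primal-dual (KKT) system at the trial step
\rho \;=\;
\begin{pmatrix} W d + \nabla f + A^T(\lambda + \delta) \\ A d + c \end{pmatrix}

% Relative-residual condition, \kappa \in (0,1):
\|\rho\| \;\le\; \kappa \left\| \begin{pmatrix} \nabla f + A^T\lambda \\ c \end{pmatrix} \right\|

% ... together with sufficient merit-model reduction for the current
% penalty parameter \pi or an increased one:
\Delta q(d;\pi) \;\ge\; \sigma\,\pi\,\|c\|
```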

  16. Algorithm Outline • for k = 0, 1, 2, … • Iteratively solve the primal-dual system • Until one of the termination tests is satisfied • Update penalty parameter • Perform backtracking line search • Update iterate
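The algorithm outline above can be sketched end to end. The following is an illustrative, simplified rendition, not the authors' implementation: the function name `inexact_sqp`, the constants (`kappa`, `sigma`, `eta`, `tau`), the steepest-descent inner solver (standing in for SymQMR), and the safeguards are all assumptions. It does, however, follow the outlined structure: an inner iteration that needs only matrix-vector products with the KKT matrix, a residual-based termination test, a penalty update enforcing sufficient model reduction, and a backtracking line search on an l1 merit function.

```python
import numpy as np

def n1(v):
    return np.linalg.norm(v, 1)

def inexact_sqp(f, grad, hess, c, jac, x0, lam0,
                kappa=0.1, sigma=0.1, eta=1e-4, tau=0.5,
                pi=1.0, tol=1e-8, max_iter=50):
    """Line-search SQP with inexact primal-dual steps (illustrative sketch)."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(max_iter):
        g, H, cv, A = grad(x), hess(x), c(x), jac(x)
        n, m = x.size, cv.size
        rhs = -np.concatenate([g + A.T @ lam, cv])   # negative KKT residual
        if np.linalg.norm(rhs) <= tol:
            break
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        # Inexact solve by steepest descent on 0.5*||K z - rhs||^2:
        # uses only products with K, mimicking an iterative solver.
        z = np.zeros(n + m)
        r = -rhs                                     # residual K z - rhs at z = 0
        while not (np.linalg.norm(r) <= kappa * np.linalg.norm(rhs)
                   and n1(r[n:]) <= max(0.5 * n1(cv), tol)):
            s = K.T @ r                              # least-squares gradient
            Ks = K @ s
            z -= (r @ Ks) / (Ks @ Ks) * s            # exact line minimization
            r = K @ z - rhs
        d, delta = z[:n], z[n:]
        # Reduction in the merit model; raise pi until it is sufficient.
        def dq(p):
            return -(g @ d) - 0.5 * max(d @ (H @ d), 0.0) \
                   + p * (n1(cv) - n1(cv + A @ d))
        while n1(cv) > 10 * tol and dq(pi) < sigma * pi * n1(cv):
            pi *= 2.0
        # Backtracking (Armijo) search on phi(x) = f(x) + pi * ||c(x)||_1.
        phi = lambda y: f(y) + pi * n1(c(y))
        a = 1.0
        while a > 1e-12 and phi(x + a * d) > phi(x) - eta * a * dq(pi):
            a *= tau
        x, lam = x + a * d, lam + a * delta
    return x, lam
```

On a toy problem such as minimizing 0.5*||x||^2 subject to x1 + x2 = 1, the iterates converge to x = (0.5, 0.5) with multiplier -0.5, even though each step solves the primal-dual system only to 10% relative accuracy.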

  17. Termination Test • Observe KKT conditions
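The KKT conditions referenced on this slide did not survive transcription. For the equality constrained problem of minimizing $f(x)$ subject to $c(x)=0$, they are:

```latex
\nabla f(x) + \nabla c(x)^T \lambda = 0 \qquad \text{(stationarity / dual feasibility)}
\qquad\qquad
c(x) = 0 \qquad \text{(primal feasibility)}
```

A practical termination test stops the algorithm once the norm of this KKT residual falls below a prescribed tolerance.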

  18. Outline • Description of an Algorithm • Step computation • Step acceptance • Global Convergence Analysis • Merit function and sufficient decrease • Satisfying first-order conditions • Model Problem (Haber) • Problem formulation • Numerical Results • Final Remarks • Future work • Negative Curvature

  19. Assumptions • The sequence of iterates is contained in a convex set and the following conditions hold: • the objective and constraint functions and their first and second derivatives are bounded • the multiplier estimates are bounded • the constraint Jacobians have full row rank and their smallest singular values are bounded below by a positive constant • the Hessian of the Lagrangian is positive definite with smallest eigenvalue bounded below by a positive constant

  20. Sufficient Reduction to Sufficient Decrease • Taylor expansion of merit function yields • Accepted step satisfies
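The expansion and the accepted-step condition on this slide were images. Schematically (the form below is a standard Armijo-type argument, assumed to match the slide), the Taylor expansion of the merit function along the step shows that backtracking terminates:

```latex
% Taylor expansion of the merit function along d
\phi(x + \alpha d;\pi) \;\le\; \phi(x;\pi) \;-\; \alpha\,\Delta q(d;\pi) \;+\; O(\alpha^2)

% hence a backtracking search finds a step length with
\phi(x + \alpha d;\pi) \;\le\; \phi(x;\pi) \;-\; \eta\,\alpha\,\Delta q(d;\pi),
\qquad \eta \in (0,1)
```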

  21. Intermediate Results • … is bounded above • … is bounded above • … is bounded below by a positive constant

  22. Sufficient Decrease in Merit Function

  23. Step in Dual Space • We converge to an optimal primal solution, and (for sufficiently small … and …) the displayed bound on the dual step holds; therefore, the first-order conditions are satisfied in the limit

  24. Outline • Description of an Algorithm • Step computation • Step acceptance • Global Convergence Analysis • Merit function and sufficient decrease • Satisfying first-order conditions • Model Problem (Haber) • Problem formulation • Numerical Results • Final Remarks • Future work • Negative Curvature

  25. Problem Formulation • Tikhonov-style regularized inverse problem • Want to solve for a reasonably large mesh size • Want to solve for a small regularization parameter • SymQMR for linear system solves • Input parameters: … (recall the two stopping conditions: … or …)
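The formulation on this slide was an image. A generic Tikhonov-regularized inverse problem in the PDE-constrained form studied by Haber and collaborators (all symbols below are generic placeholders, not transcribed from the slide) reads:

```latex
\min_{u,\,m}\;\; \tfrac12\,\| Q u - d \|^2 \;+\; \beta\, R(m)
\qquad \text{s.t.} \qquad A(m)\, u = q
```

where $u$ is the state, $m$ the inversion parameter, $Q$ the observation operator, $\beta > 0$ the regularization parameter, and $A(m)u = q$ the discretized forward model. This matches the slide's goals: solve on a reasonably large mesh, with small $\beta$, using SymQMR for the linear system solves.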

  26. Numerical Results

  27. Numerical Results

  28. Numerical Results

  29. Numerical Results

  30. Numerical Results

  31. Outline • Description of an Algorithm • Step computation • Step acceptance • Global Convergence Analysis • Merit function and sufficient decrease • Satisfying first-order conditions • Model Problem (Haber) • Problem formulation • Numerical Results • Final Remarks • Future work • Negative Curvature

  32. Review and Future Challenges • Review • Defined a globally convergent inexact SQP algorithm • Require only inexact solutions of primal-dual system • Require only matrix-vector products involving objective and constraint function derivatives • Results also apply when only reduced Hessian of Lagrangian is assumed to be positive definite • Numerical experience on model problem is promising • Future challenges • (Nearly) Singular constraint Jacobians • Inexact derivative information • Negative curvature • etc., etc., etc….

  33. Negative Curvature • Big question • What is the best way to handle negative curvature (i.e., when the reduced Hessian may be indefinite)? • Small question • What is the best way to handle negative curvature in the context of our inexact SQP algorithm? • We have no inertia information! • Smaller question • When can we handle negative curvature in the context of our inexact SQP algorithm with NO algorithmic modifications? • When do we know that a given step is OK? • Our analysis of the inexact case leads to a few observations…

  34. Why Quadratic Models?

  35. Why Quadratic Models? • Provides a good… • direction? Yes • step length? Yes • Provides a good… • direction? Maybe • step length? Maybe

  36. Why Quadratic Models? • One can use our stopping criteria as a mechanism for determining which are good directions • All that needs to be determined is whether the step lengths are acceptable

  37. Unconstrained Optimization • Direct method: the angle test • Indirect method: check the conditions … or …

  38. Unconstrained Optimization • Direct method: the angle test • Indirect method: check the conditions … (step quality) or … (step length)
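The two tests named here were images in the original. In standard inexact Newton practice for unconstrained optimization (the constants below are assumptions), they correspond to:

```latex
% Direct (angle) test on a trial direction d:
-\,\nabla f(x)^T d \;\ge\; \theta\,\|\nabla f(x)\|\,\|d\|, \qquad \theta \in (0,1)

% Indirect (residual) test for an inexact Newton step H d \approx -\nabla f:
\| H d + \nabla f(x) \| \;\le\; \kappa\, \|\nabla f(x)\|, \qquad \kappa \in (0,1)
```

The angle test certifies the quality of the direction, while the residual test also controls how well-scaled the step is, which is the distinction the slide's annotations draw.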

  39. Constrained Optimization • Step quality determined by … • Step length determined by … or … or …

  40. Thanks!

  41. Actual Stopping Criteria • Stopping conditions: … • Model reduction condition: … or …

  42. Constraint Feasible Case • If feasible, the conditions reduce to …

  43. Constraint Feasible Case • If feasible, the conditions reduce to …

  44. Constraint Feasible Case • If feasible, the conditions reduce to … Some region around the exact solution

  45. Constraint Feasible Case • If feasible, the conditions reduce to … Ellipse distorted toward the linearized constraints

  46. Constraint Feasible Case • If feasible, the conditions reduce to …
