
Nonlinear Programming



Presentation Transcript


  1. Nonlinear Programming In this handout: • Gradient Search for Multivariable Unconstrained Optimization • KKT Conditions for Optimality in Constrained Optimization • Algorithms for Solving Convex Programs

  2. Multivariable Unconstrained Optimization max f(x1, …, xn) No functional constraints. Consider the case when f is concave. The necessary and sufficient condition for optimality is that all the partial derivatives are 0. In most cases, however, the resulting system of equations cannot be solved analytically, so a numerical search procedure must be used.
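As a quick illustration (the function below is my own example, not from the handout), consider a concave quadratic for which setting the partial derivatives to zero can be done analytically:

```python
# Illustrative example (not from the handout): a concave quadratic
#   f(x1, x2) = 4*x1 + 6*x2 - x1**2 - x2**2
# Setting the partials to zero:
#   df/dx1 = 4 - 2*x1 = 0  ->  x1 = 2
#   df/dx2 = 6 - 2*x2 = 0  ->  x2 = 3
# Since f is concave, this stationary point is the global maximum.

def f(x1, x2):
    return 4*x1 + 6*x2 - x1**2 - x2**2

print(f(2.0, 3.0))  # -> 13.0, the maximum value
```

For most nonlinear f, no such closed-form solution exists, which is what motivates the gradient search procedure on the next slide.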

  3. The Gradient Search Procedure The gradient of f at a specific point x = x' is the vector of partial derivatives evaluated there: ∇f(x') = (∂f/∂x1, …, ∂f/∂xn) at x = x'. The rate at which f increases is maximized in the direction of the gradient. Keep moving in the direction of the gradient until f stops increasing.
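The procedure can be sketched as follows; the objective function, starting point, and the crude backtracking line search are illustrative assumptions, not part of the handout:

```python
import numpy as np

def gradient_search(f, grad, x0, tol=1e-6, max_iter=1000):
    """Steepest ascent: repeatedly move in the gradient direction
    while f keeps increasing; stop when the gradient is near zero."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # gradient ~ 0: stationary point
            break
        t = 1.0                          # crude backtracking line search:
        while f(x + t * g) <= f(x) and t > 1e-12:
            t *= 0.5                     # shrink the step until f increases
        x = x + t * g
    return x

# Concave example: f(x1, x2) = 4*x1 + 6*x2 - x1**2 - x2**2, maximum at (2, 3)
f = lambda x: 4*x[0] + 6*x[1] - x[0]**2 - x[1]**2
grad = lambda x: np.array([4 - 2*x[0], 6 - 2*x[1]])
print(gradient_search(f, grad, [0.0, 0.0]))  # -> [2. 3.]
```

A textbook implementation would use an exact one-dimensional maximization along the gradient direction; the backtracking loop above is a simpler stand-in for that step.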

  4. Examples on the board.

  5. Constrained Optimization max f(x1, …, xn) subject to gi(x1, …, xn) ≤ bi, i = 1, …, m x1, …, xn ≥ 0 The necessary conditions for optimality are called the Karush-Kuhn-Tucker conditions (or KKT conditions), because they were derived independently by Karush (1939) and by Kuhn and Tucker (1951).

  6. KKT conditions For the problem above, x* = (x1*, …, xn*) satisfies the KKT conditions if there exist multipliers u1, …, um such that, for j = 1, …, n and i = 1, …, m: • ∂f/∂xj − Σi ui ∂gi/∂xj ≤ 0 at x = x* • xj* (∂f/∂xj − Σi ui ∂gi/∂xj) = 0 at x = x* • gi(x*) − bi ≤ 0 • ui (gi(x*) − bi) = 0 • xj* ≥ 0 • ui ≥ 0 The conditions are also sufficient for optimality if f is concave and the gi's are convex.

  7. KKT conditions • The ui's can be interpreted as dual variables; the KKT conditions then resemble the complementary slackness conditions of linear programming. • For relatively simple problems, the KKT conditions can be used to derive an optimal solution directly; for example, they are used to develop a modified simplex method for quadratic programming. • For more complicated problems, it might be impossible to derive a solution directly from the KKT conditions, but they can still be used to check whether a proposed solution is optimal (or close to optimal).
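That last use, verifying a proposed solution, can be sketched numerically. The small problem, candidate point, and multiplier below are all my own assumptions for illustration:

```python
import numpy as np

# Hypothetical example (not from the handout):
#   max f(x) = 2*x1 + x2 - x1**2   s.t.  x1 + x2 <= 2,  x1, x2 >= 0
# Proposed solution: x* = (0.5, 1.5) with multiplier u = 1.

def kkt_check(x, u, tol=1e-8):
    """Check the KKT conditions for the single-constraint problem above."""
    x = np.asarray(x, dtype=float)
    df = np.array([2 - 2*x[0], 1.0])      # gradient of f at x
    dg = np.array([1.0, 1.0])             # gradient of g(x) = x1 + x2
    g, b = x[0] + x[1], 2.0
    stationarity = df - u * dg            # must be <= 0 componentwise
    return (
        np.all(stationarity <= tol)
        and np.all(np.abs(x * stationarity) <= tol)  # compl. slackness in x
        and g <= b + tol                              # primal feasibility
        and abs(u * (g - b)) <= tol                   # compl. slackness in u
        and np.all(x >= -tol) and u >= -tol           # nonnegativity
    )

print(kkt_check((0.5, 1.5), 1.0))  # -> True: the candidate is a KKT point
```

Because f here is concave and the constraint is linear (hence convex), passing this check certifies global optimality of the candidate.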

  8. Algorithms for solving Convex Programming problems Most of the algorithms fall into one of the following three categories. 1) Gradient algorithms, where the gradient search procedure is modified to keep the search path from penetrating any constraint boundary. 2) Sequential unconstrained algorithms, which convert the original constrained problem into a sequence of unconstrained problems whose optimal solutions converge to the optimal solution of the original problem (for example, the barrier function method used in interior-point methods for linear programming).
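The sequential unconstrained (barrier) idea can be sketched on a toy problem; the problem, the logarithmic barrier, and the ternary-search inner solver are my own illustrative choices:

```python
import math

# Toy problem (my example, not from the handout):
#   max f(x) = x   subject to   0 <= x <= 1   (optimum at x = 1)
# Barrier subproblem: maximize  B(x; mu) = x + mu*(log(x) + log(1 - x)),
# which is unconstrained on (0, 1); the log terms repel the search
# from the constraint boundaries.

def maximize_barrier(mu, lo=1e-9, hi=1 - 1e-9, iters=200):
    """Ternary search works because B(.; mu) is strictly concave on (0, 1)."""
    B = lambda x: x + mu * (math.log(x) + math.log(1 - x))
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if B(m1) < B(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

x = None
for mu in [1.0, 0.1, 0.01, 0.001]:  # shrink the barrier parameter
    x = maximize_barrier(mu)
print(x)  # approaches the constrained optimum x = 1 as mu shrinks
```

Each unconstrained maximizer is strictly feasible, and the sequence converges to the optimum of the original constrained problem as mu goes to 0, which is exactly the mechanism behind barrier-based interior-point methods.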

  9. Algorithms for solving Convex Programming problems (cont.) 3) Sequential-approximation algorithms, which replace the nonlinear objective function by a succession of linear or quadratic approximations. These are particularly suitable for linearly constrained optimization problems. One example is the Frank-Wolfe algorithm for linearly constrained convex programming.
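A minimal Frank-Wolfe sketch on a toy linearly constrained problem; the objective, the unit-box feasible region, and the standard diminishing step size are my assumptions. The key idea from the slide is visible in the code: each iteration replaces f by its linearization at the current point and solves that linear subproblem:

```python
import numpy as np

# Toy problem (my example):
#   max f(x) = -(x1 - 0.3)**2 - (x2 - 0.7)**2   over the box 0 <= x <= 1
# The linearized subproblem max grad(x_k).T @ s over the box is an LP whose
# solution is found by inspection: pick each coordinate's best bound.

def frank_wolfe(grad, x0, iters=2000):
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = grad(x)
        s = np.where(g > 0, 1.0, 0.0)  # vertex of the box maximizing g.T @ s
        gamma = 2.0 / (k + 2.0)        # standard diminishing step size
        x = x + gamma * (s - x)        # move toward the LP solution
    return x

grad = lambda x: np.array([-2*(x[0] - 0.3), -2*(x[1] - 0.7)])
x = frank_wolfe(grad, [0.0, 0.0])
print(x)  # -> close to the optimum (0.3, 0.7)
```

Every iterate is a convex combination of feasible points, so feasibility is maintained automatically; a production implementation would solve the LP subproblem with a real LP solver and often use a line search in place of the fixed 2/(k+2) schedule.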
