
Non-Linear Programming






Presentation Transcript


  1. Non-Linear Programming

  2. Motivation
  • Method of Lagrange multipliers:
    • Very useful insight into solutions
    • Analytical solution practical only for small problems
    • Direct application not practical for real-life problems because these problems are too large
    • Difficulties when the problem is non-convex
  • Often need to search for the solution of practical optimization problems using:
    • the objective function only, or
    • the objective function and its first derivative, or
    • the objective function and its first and second derivatives

  3. Naïve One-Dimensional Search
  • Suppose:
    • that we want to find the value of x that minimizes f(x), and
    • that the only thing we can do is calculate the value of f(x) for any value of x
  • We could calculate f(x) for a range of values of x and choose the one that minimizes f(x)
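A minimal sketch of this naïve search in Python (the objective f here is a hypothetical example; the slides only assume that f(x) can be evaluated):

```python
# Hypothetical example objective; the minimum is at x = 2.
def f(x):
    return (x - 2.0) ** 2 + 1.0

def grid_search(f, lo, hi, n=1001):
    """Evaluate f at n evenly spaced points in [lo, hi] and
    return the point with the smallest value found."""
    best_x, best_f = lo, f(lo)
    for i in range(1, n):
        x = lo + (hi - lo) * i / (n - 1)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x

x_star = grid_search(f, 0.0, 5.0)   # close to 2.0
```

As the next slide notes, the cost grows with the width of the range and the accuracy required: halving the grid spacing doubles the number of evaluations.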

  4. Naïve One-Dimensional Search [figure: f(x) plotted against x]
  • Requires a considerable amount of computing time if the range is large and good accuracy is needed

  5. One-Dimensional Search [figure: f(x) evaluated at a first point x0]

  6. One-Dimensional Search [figure: a second point x1 added]

  7. One-Dimensional Search [figure: current search range spanned by x0, x2, x1]
  If the function is convex, we have bracketed the optimum

  8. One-Dimensional Search [figure: a new point x3 added]
  Optimum is between x1 and x2; we do not need to consider x0 anymore

  9. One-Dimensional Search [figure: a further point x4 narrows the range]
  Repeat the process until the range is sufficiently small

  10. One-Dimensional Search [figure: final, narrowed search range]
  The procedure is valid only if the function is convex!
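One standard realization of this interval-shrinking idea is golden-section search; the slides do not name a specific rule for placing the interior points, so this is a sketch for a unimodal (e.g. convex) f:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Shrink [a, b] around the minimizer of a unimodal f
    until the bracket is narrower than tol."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # ~0.618
    c = b - inv_phi * (b - a)                # two interior trial points
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):          # minimum is bracketed in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                    # minimum is bracketed in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0
```

Each iteration shrinks the bracket by a constant factor, so the number of function evaluations grows only logarithmically with the accuracy, unlike the naïve grid search. (For brevity this sketch re-evaluates f at both interior points each pass; a production version reuses one of the two values.)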

  11. Multi-Dimensional Search
  • Unidirectional search not applicable
  • Naïve search becomes totally impossible as the dimension of the problem increases
  • If we can calculate the first derivatives of the objective function, much more efficient searches can be developed
  • The gradient of a function gives the direction in which it increases/decreases fastest

  12.–13. Steepest Ascent/Descent Algorithm [figures: contours of the objective in the (x1, x2) plane and the first steps of the search]

  14. Unidirectional Search [figure: a one-dimensional search along the gradient direction; labels: objective function, gradient direction]

  15.–25. Steepest Ascent/Descent Algorithm [animation frames: successive steps in the (x1, x2) plane converging toward the optimum]
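A minimal sketch of the steepest-descent iteration illustrated by these frames, assuming a fixed step size and a known gradient (both choices are illustrative; the slides leave them open, and slide 27 discusses choosing the step by a unidirectional search instead):

```python
def steepest_descent(grad, x0, step=0.1, iters=200):
    """Repeatedly move against the gradient: x <- x - step * grad(x)."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Example: minimize f(x1, x2) = (x1 - 1)^2 + (x2 + 2)^2,
# whose gradient is (2(x1 - 1), 2(x2 + 2)); minimum at (1, -2).
x_min = steepest_descent(lambda x: [2 * (x[0] - 1), 2 * (x[1] + 2)],
                         [0.0, 0.0])
```

For steepest *ascent* (maximization), the sign flips: x <- x + step * grad(x).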

  26. Choosing a Direction
  • The direction of steepest ascent/descent is not always the best choice
  • Other techniques have been used with varying degrees of success
  • In particular, the direction chosen must be consistent with the equality constraints

  27. How far to go in that direction?
  • Unidirectional searches can be time-consuming
  • Second-order techniques that use information about the second derivative of the objective function can be used to speed up the process
  • Problem with the inequality constraints:
    • There may be a lot of inequality constraints
    • Checking all of them every time we move in one direction can take an enormous amount of computing time
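The second-order idea can be illustrated with a one-dimensional Newton iteration, which uses both the first and the second derivative (the example function is hypothetical):

```python
def newton_min_1d(df, d2f, x0, iters=20):
    """Newton's method for minimization: x <- x - f'(x) / f''(x).
    Each step jumps to the minimum of the local quadratic model."""
    x = x0
    for _ in range(iters):
        x -= df(x) / d2f(x)
    return x

# For f(x) = x^4: f'(x) = 4x^3, f''(x) = 12x^2; the minimum is at x = 0.
x_star = newton_min_1d(lambda x: 4 * x ** 3,
                       lambda x: 12 * x ** 2,
                       3.0)
```

On a quadratic objective one Newton step lands exactly on the optimum, which is why curvature information speeds up the search so much.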

  28. Handling of inequality constraints [figure: moving in the direction of the gradient in the (x1, x2) plane; caption: "How do I know that I have to stop here?"]

  29. Handling of inequality constraints [figure: same question, "How do I know that I have to stop here?", in the (x1, x2) plane]

  30. Penalty functions [figure: penalty plotted against x, with limits xmin and xmax]

  31. Penalty functions [figure: quadratic penalty K(x − xmax)² beyond the limits xmin and xmax]
  Replace the enforcement of inequality constraints by the addition of penalty terms to the objective function
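The penalty approach on this slide can be sketched as a wrapper that adds the quadratic term K(x − xmax)², and its mirror image at xmin, to a hypothetical objective f:

```python
def penalized(f, xmin, xmax, K):
    """Return a new objective: f(x) plus a quadratic penalty
    whenever x falls outside [xmin, xmax]."""
    def fp(x):
        p = 0.0
        if x > xmax:
            p = K * (x - xmax) ** 2   # penalty term from the slide
        elif x < xmin:
            p = K * (x - xmin) ** 2
        return f(x) + p
    return fp

# Unconstrained, (x - 5)^2 is minimized at x = 5; with xmax = 3 the
# penalty pulls the minimizer back toward the limit, but it still
# violates the constraint slightly unless K is driven up, which is
# the stiffness issue on slide 32.
fp = penalized(lambda x: (x - 5.0) ** 2, 0.0, 3.0, K=10.0)
```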

  32. Problem with penalty functions [figure: penalty curve between xmin and xmax]
  The stiffness of the penalty function must be increased progressively to enforce the constraints tightly enough
  Not a very efficient method

  33. Barrier functions [figure: barrier cost rising near the limits xmin and xmax]

  34. Barrier functions [figure: barrier cost near the limits xmin and xmax]
  The barrier must be moved progressively closer to the limit
  Works better than penalty functions
  This is the idea behind interior point methods
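A common choice of barrier (not specified on the slides) is the logarithmic barrier used by interior point methods; a sketch, with an illustrative f and barrier weight mu:

```python
import math

def with_log_barrier(f, xmin, xmax, mu):
    """f(x) plus a log barrier that blows up as x approaches either
    limit. Valid only strictly inside (xmin, xmax); interior point
    methods shrink mu over the iterations, which moves the barrier
    progressively closer to the limits as the slide describes."""
    def fb(x):
        if not (xmin < x < xmax):
            return float("inf")      # outside the interior: infeasible
        return f(x) - mu * (math.log(x - xmin) + math.log(xmax - x))
    return fb

fb = with_log_barrier(lambda x: (x - 5.0) ** 2, 0.0, 3.0, mu=0.1)
```

Unlike the penalty, the barrier keeps every iterate strictly feasible, at the price of having to start from a feasible point.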

  35. Non-Robustness [figure: contours of a non-convex problem in the (x1, x2) plane; starting points A, B, C, D lead to the different solutions X and Y]
  Different starting points may lead to different solutions if the problem is not convex
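Slide 35's point can be reproduced with any non-convex function: gradient descent started from different points settles in different local minima. A one-dimensional illustration (the function is hypothetical):

```python
def descend(grad, x0, step=0.01, iters=2000):
    """Plain gradient descent from starting point x0."""
    x = x0
    for _ in range(iters):
        x -= step * grad(x)
    return x

# f(x) = x^4 - 2x^2 has two local minima, at x = -1 and x = +1;
# f'(x) = 4x^3 - 4x.  Which minimum descent finds depends entirely
# on where it starts.
grad = lambda x: 4 * x ** 3 - 4 * x
left = descend(grad, -0.5)    # converges near x = -1
right = descend(grad, 0.5)    # converges near x = +1
```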

  36. Conclusions
  • Very sophisticated non-linear programming methods have been developed
  • They can be difficult to use:
    • Different starting points may lead to different solutions
    • Some problems will require a lot of iterations
    • They may require a lot of "tuning"
