
Engineering Optimization



Presentation Transcript


  1. Concepts and Applications Engineering Optimization • Fred van Keulen • Matthijs Langelaar • CLA H21.1 • A.vanKeulen@tudelft.nl

  2. Recap / overview • Optimization problem: model, definition (negative null form), checking • Special topics: linear / convex problems, sensitivity analysis, topology optimization • Solution methods: unconstrained problems (optimality criteria, optimization algorithms) and constrained problems (optimality criteria, optimization algorithms)

  3. Summary optimality conditions • Conditions for local minimum of unconstrained problem: • First-order necessity condition: ∇f(x*) = 0 • Second-order sufficiency condition: H(x*) positive definite • For convex f in a convex feasible domain, the stationarity condition ∇f(x*) = 0 is a sufficiency condition for a global minimum

  4. Stationary point nature summary • Definiteness of H → nature of x*: • Positive definite → Minimum • Positive semi-definite → Valley • Indefinite → Saddle point • Negative semi-definite → Ridge • Negative definite → Maximum
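
A minimal Python sketch of this classification by the eigenvalues of the Hessian H (the function name classify_stationary_point is illustrative, not from the slides; it assumes H is symmetric, as noted on the next slide):

```python
import numpy as np

def classify_stationary_point(H, tol=1e-10):
    """Classify a stationary point from the (symmetric) Hessian H."""
    eig = np.linalg.eigvalsh(H)           # real eigenvalues (H symmetric)
    pos = np.any(eig > tol)
    neg = np.any(eig < -tol)
    zero = np.any(np.abs(eig) <= tol)
    if pos and neg:
        return "saddle point"              # indefinite
    if pos:
        return "valley" if zero else "minimum"   # positive (semi-)definite
    if neg:
        return "ridge" if zero else "maximum"    # negative (semi-)definite
    return "degenerate (H = 0)"

# Example: f(x, y) = x**2 + 3*y**2 has Hessian diag(2, 6) -> minimum
print(classify_stationary_point(np.diag([2.0, 6.0])))
```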

  5. Complex eigenvalues? • Question: what is the nature of a stationary point when H has complex eigenvalues? • Answer: this situation never occurs, because H is symmetric by definition. Symmetric matrices have real eigenvalues (spectral theorem).

  6. Nature of stationary points • [Figure: structure of length l under load F, supported by springs k1 and k2] • Nature of initial position depends on load (buckling):

  7. Nature of stationary points (2)

  8. Unconstrained optimization algorithms • Single-variable methods • 0th order (involving only f ) • 1st order (involving f and f ’ ) • 2nd order (involving f, f ’ and f ” ) • Multiple variable methods

  9. Why optimization algorithms? • Optimality conditions often cannot be used: • Function not explicitly known (e.g. simulation) • Conditions cannot be solved analytically • Example: stationary points

  10. 0th order methods: pro/con • Strengths: • No derivatives needed • Work also for discontinuous / non-differentiable functions • Easy to program • Robust • Weaknesses: • (Usually) less efficient than higher order methods (many function evaluations)

  11. Minimization with one variable • Why? • Simplest case: good starting point • Used in multi-variable methods during line search • Setting: iterative process in which the optimizer proposes x and the model returns f

  12. Termination criteria • Stop optimization iterations when: • Solution is sufficiently accurate (check optimality criteria) • Progress becomes too slow • Maximum resources have been spent • The solution diverges • Cycling occurs
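
A rough illustration of how these criteria can be combined in a single stopping test (Python; all names and tolerance values are illustrative and not taken from the slides):

```python
def should_stop(f_history, grad, k, eps_g=1e-6, eps_f=1e-12, max_iter=200, f_max=1e12):
    """Combined termination check; names and tolerances are illustrative."""
    if abs(grad) < eps_g:                 # solution sufficiently accurate (optimality criterion)
        return True
    if k >= max_iter:                     # maximum resources spent
        return True
    if len(f_history) >= 2 and abs(f_history[-1] - f_history[-2]) < eps_f:
        return True                       # progress too slow
    if abs(f_history[-1]) > f_max:        # solution diverges
        return True
    return False                          # (cycling detection omitted for brevity)
```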

  13. Brute-force approach • Simple approach: exhaustive search • n equally spaced points in the initial interval of size L0: final interval size Ln = 2L0 / (n + 1) • Disadvantage: rather inefficient
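
A minimal sketch of exhaustive search under the assumption of n equally spaced interior evaluations (Python; the helper name exhaustive_search is illustrative):

```python
import numpy as np

def exhaustive_search(f, a, b, n):
    """Evaluate f at n equally spaced interior points and bracket the best one."""
    grid = np.linspace(a, b, n + 2)                 # includes the endpoints a and b
    i = 1 + np.argmin([f(x) for x in grid[1:-1]])   # best interior point
    return grid[i - 1], grid[i + 1]                 # bracket of width 2(b - a)/(n + 1)

a, b = exhaustive_search(lambda x: (x - 1.3)**2, 0.0, 4.0, 9)
print(a, b)   # interval of width 2*4/10 = 0.8 containing the minimum
```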

  14. Basic strategy of 0th order methods for single-variable case • Find interval [a0, b0] that contains the minimum (bracketing) • Iteratively reduce the size of the interval [ak, bk] (sectioning) • Approximate the minimum by the minimum of a simple interpolation function over the interval [aN, bN] • Sectioning methods: • Dichotomous search • Fibonacci method • Golden section method

  15. Bracketing the minimum • Starting point x1, stepsize Δ, expansion parameter γ: user-defined • Evaluate points x1, x2 = x1 + Δ, x3 = x2 + γΔ, x4 = x3 + γ²Δ, … until f increases; the last three points give the bracket [a0, b0]
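
A possible implementation of this bracketing scheme (Python; names, defaults and the stopping rule are illustrative assumptions, not the slides' exact procedure):

```python
def bracket_minimum(f, x1, delta=0.1, gamma=2.0, max_steps=50):
    """Bracket a minimum by stepping forward with expanding steps.

    Assumes f initially decreases in the +delta direction.
    Returns (a, b) with the minimum of a unimodal f inside [a, b].
    """
    x_prev, x_cur = x1, x1 + delta
    f_cur = f(x_cur)
    step = gamma * delta
    for _ in range(max_steps):
        x_next = x_cur + step
        f_next = f(x_next)
        if f_next > f_cur:                 # function started to increase
            return x_prev, x_next          # minimum lies in [x_prev, x_next]
        x_prev, x_cur, f_cur = x_cur, x_next, f_next
        step *= gamma                      # expand the step size
    raise RuntimeError("No bracket found; try another x1 or step direction")

print(bracket_minimum(lambda x: (x - 3.0)**2, 0.0))
```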

  16. Unimodality • Bracketing and sectioning methods work best for unimodal functions: “A unimodal function consists of exactly one monotonically decreasing and one monotonically increasing part”

  17. Dichotomous search (di·chot·o·mous, adjective: dividing into two parts) • Conceptually simple idea: try to split the interval [a0, b0] (size L0) in half in each step • Two evaluations per step, placed a small distance δ << L0 apart around the midpoint

  18. Dichotomous search (2) • Interval size after 1 step (2 evaluations): L1 = L0/2 + δ/2 • Interval size after m steps (2m evaluations): Lm = L0/2^m + δ(1 − 1/2^m) • Proper choice for δ: much smaller than the desired final interval size
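
A minimal dichotomous-search sketch based on this idea (Python; names and tolerances are illustrative; tol should be chosen larger than δ, since the interval size converges to δ):

```python
def dichotomous_search(f, a, b, delta=1e-4, tol=1e-3):
    """Dichotomous sectioning: two evaluations per step, delta apart around the midpoint."""
    while (b - a) > tol:                   # tol > delta, otherwise the loop never ends
        mid = 0.5 * (a + b)
        x1, x2 = mid - 0.5 * delta, mid + 0.5 * delta
        if f(x1) < f(x2):
            b = x2                         # minimum lies in [a, x2]
        else:
            a = x1                         # minimum lies in [x1, b]
    return 0.5 * (a + b)

print(dichotomous_search(lambda x: (x - 2.0)**2, 0.0, 5.0))
```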

  19. Dichotomous search (3) • [Plot: achieved interval reduction versus number of steps m, compared with the ideal interval reduction] • Example: m = 10

  20. Sectioning – Fibonacci (Fibonacci, 1180?–1250?) • Situation: minimum bracketed between x1 and x3, with interior point x2 • Test new points x4 and reduce the interval • Optimal point placement?

  21. Optimal sectioning • Fibonacci method: optimal sectioning method • Given: • Initial interval [a0, b0] • Predefined total number of evaluations N, or: • Desired final interval size ε

  22. Fibonacci sectioning – basic idea • Start at the final interval IN and use symmetry and maximum interval reduction (with δ << IN): • IN−1 = 2IN • IN−2 = 3IN • IN−3 = 5IN • IN−4 = 8IN • IN−5 = 13IN • The multipliers are the Fibonacci numbers

  23. Sectioning – Golden Section • For large N, the Fibonacci fraction β converges to the golden section ratio φ (0.618034…) • The golden section method uses this constant interval reduction ratio

  24. Sectioning – Golden Section (2) • Origin of golden section: constant reduction ratio φ with reuse of one interior point requires I1 = I2 + I3, i.e. φ² + φ − 1 = 0, giving φ = (√5 − 1)/2 ≈ 0.618 • Interval sizes: I2 = φI1, I3 = φI2 = φ²I1, … • Final interval: IN = φ^(N−1) I1
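
A compact golden-section search along these lines (Python; the function name and tolerance are illustrative; only one new evaluation is needed per step because one interior point is reused):

```python
TAU = 0.6180339887498949   # golden section ratio (sqrt(5) - 1) / 2

def golden_section(f, a, b, tol=1e-5):
    """Golden-section sectioning: constant interval reduction by TAU per step."""
    x1 = b - TAU * (b - a)
    x2 = a + TAU * (b - a)
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol:
        if f1 < f2:                        # minimum in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - TAU * (b - a)
            f1 = f(x1)
        else:                              # minimum in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + TAU * (b - a)
            f2 = f(x2)
    return 0.5 * (a + b)

print(golden_section(lambda x: (x - 2.0)**2, 0.0, 5.0))
```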

  25. Comparison sectioning methods • Example: evaluations N needed for reduction to 2% of the original interval: • Dichotomous: 12 • Golden section: 9 • Fibonacci: 8 • (Exhaustive: 99) • [Plot: interval reduction versus evaluations for ideal dichotomous, golden section and Fibonacci] • Conclusion: Golden section simple and near-optimal

  26. Quadratic interpolation • Three points of the bracket (ai, an interior point, bi) define the interpolating quadratic function • New point xnew evaluated at the minimum of the parabola, yielding the reduced bracket [ai+1, bi+1] • For a minimum: parabola coefficient a > 0! • Shift xnew when very close to an existing point
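
A sketch of one quadratic-interpolation step using the standard three-point parabola-minimum formula (Python; the function name parabola_minimum is illustrative):

```python
def parabola_minimum(x1, f1, x2, f2, x3, f3):
    """Minimum of the quadratic through (x1,f1), (x2,f2), (x3,f3).

    Valid when the three points bracket a minimum, so the leading
    coefficient of the parabola is positive (a > 0).
    """
    num = (x2 - x1)**2 * (f2 - f3) - (x2 - x3)**2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

# One interpolation step on f(x) = (x - 2)**2 with bracket 0 < 1 < 5
f = lambda x: (x - 2.0)**2
print(parabola_minimum(0.0, f(0.0), 1.0, f(1.0), 5.0, f(5.0)))   # -> 2.0
```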

  27. Unconstrained optimization algorithms • Single-variable methods • 0th order (involving only f ) • 1st order (involving f and f ’ ) • 2nd order (involving f, f ’ and f ” ) • Multiple variable methods

  28. Cubic interpolation • Similar to quadratic interpolation, but with 2 points (ai, bi) and derivative information

  29. Bisection method • Optimality condition: minimum at a stationary point → root finding of f ’ • Similar to sectioning methods, but uses the derivative: keep the sub-interval in which f ’ changes sign
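
A minimal bisection sketch applied to f ’ (Python; names and tolerances are illustrative):

```python
def bisection(fprime, a, b, tol=1e-8, max_iter=100):
    """Bisection on f': halve the interval in which f' changes sign."""
    fa = fprime(a)
    assert fa * fprime(b) < 0, "f' must change sign on [a, b]"
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = fprime(m)
        if abs(fm) < tol or (b - a) < tol:
            return m
        if fa * fm < 0:        # sign change in [a, m]
            b = m
        else:                  # sign change in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

# Minimum of f(x) = (x - 2)**2, i.e. root of f'(x) = 2(x - 2)
print(bisection(lambda x: 2.0 * (x - 2.0), 0.0, 5.0))
```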

  30. Secant method • Also based on root finding of f ’ • Uses linear interpolation of f ’ through the two most recent points
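
A secant-method sketch on f ’, using the root of the interpolating line as the next iterate (Python; names and tolerances are illustrative):

```python
def secant(fprime, x0, x1, tol=1e-10, max_iter=50):
    """Secant method on f': x2 = x1 - f'(x1) * (x1 - x0) / (f'(x1) - f'(x0))."""
    g0, g1 = fprime(x0), fprime(x1)
    for _ in range(max_iter):
        if abs(g1 - g0) < 1e-16:           # avoid division by (almost) zero
            break
        x2 = x1 - g1 * (x1 - x0) / (g1 - g0)   # root of the interpolating line
        if abs(x2 - x1) < tol:
            return x2
        x0, g0 = x1, g1
        x1, g1 = x2, fprime(x2)
    return x1

print(secant(lambda x: 2.0 * (x - 2.0), 0.0, 5.0))   # -> 2.0
```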

  31. Unconstrained optimization algorithms • Single-variable methods • 0th order (involving only f ) • 1st order (involving f and f ’ ) • 2nd order (involving f, f ’ and f ” ) • Multiple variable methods

  32. Newton’s method • Again, root finding of f ’ • Basis: Taylor (linear) approximation of f ’: f ’(x) ≈ f ’(xk) + f ”(xk)(x − xk) • New guess (set the approximation to zero): xk+1 = xk − f ’(xk) / f ”(xk)
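
A direct transcription of this update into code (Python; names, tolerances and the example function are illustrative):

```python
def newton(fprime, fsecond, x0, tol=1e-10, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f'(x_k) / f''(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge (it may diverge)")

# f(x) = x**4 - 2*x**2 + x: f'(x) = 4x**3 - 4x + 1, f''(x) = 12x**2 - 4
print(newton(lambda x: 4*x**3 - 4*x + 1, lambda x: 12*x**2 - 4, x0=1.0))
```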

  33. Newton’s method • [Plots of f ’: one iteration xk → xk+1 → xk+2 that converges, and one that diverges] • Best convergence of all methods • Unless it diverges

  34. Summary single variable methods • Bracketing + sectioning / interpolation: • 0th order: dichotomous sectioning, Fibonacci sectioning, golden ratio sectioning, quadratic interpolation • 1st order: cubic interpolation, bisection method, secant method • 2nd order: Newton method • And many, many more! • In practice: additional “tricks” needed to deal with: • Multimodality • Strong fluctuations • Round-off errors • Divergence
