
Unconstrained Optimization




  1. Unconstrained Optimization
  • Objective: find the minimum of F(X), where X is a vector of design variables
  • We may know lower and upper bounds for the optimum
  • No constraints

  2. Outline
  • General optimization strategy
  • Optimization of second-degree polynomials
  • Zero-order methods
    • Random search
    • Powell’s method
  • First-order methods
    • Steepest descent
    • Conjugate gradient
  • Second-order methods

  3. General optimization strategy (flowchart)
  • Start: set q = 0
  • Set q = q + 1 and pick a search direction S_q
  • One-dimensional search: find α*_q and set x_q = x_(q-1) + α*_q S_q
  • Converged? If yes, exit; if no, go back and pick the next search direction
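
The loop in slide 3 can be written as a short driver that the later methods reuse. This is a minimal sketch, assuming SciPy's minimize_scalar for the one-dimensional search; the names optimize and pick_direction are illustrative and not from the slides.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def optimize(F, x0, pick_direction, tol=1e-8, max_iter=200):
    """Generic strategy: pick S_q, do a 1-D search for alpha*, update, test convergence."""
    x = np.asarray(x0, dtype=float)
    for q in range(1, max_iter + 1):
        S = pick_direction(F, x, q)                        # search direction S_q
        alpha = minimize_scalar(lambda a: F(x + a * S)).x  # one-dimensional search for alpha*_q
        x_new = x + alpha * S                              # x_q = x_(q-1) + alpha*_q S_q
        if np.linalg.norm(x_new - x) < tol:                # converged?
            return x_new
        x = x_new
    return x

# Example direction rule: cycle through the coordinate unit vectors
# pick_direction = lambda F, x, q: np.eye(len(x))[(q - 1) % len(x)]
```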

  4. Optimization of second-degree polynomials
  • Quadratic: F(X) = a11 x1^2 + a12 x1 x2 + … + ann xn^2 = {X}^T [A] {X}
  • [A] is equal to one half of the Hessian matrix [H]
  • There is a linear transformation {X} = [S]{Y} such that F(Y) = λ1 y1^2 + … + λn yn^2 (no coupling terms)
  • [S]: its columns are the eigenvectors S1, …, Sn of [A]
  • S1, …, Sn are also eigenvectors of [H]

  5. Optimization of second-degree polynomials
  • Define the conjugate directions S1, …, Sn
  • S1, …, Sn are orthogonal (i.e. their dot products are zero) because matrix [A] is symmetric
  • Note that conjugate directions are also linearly independent. Orthogonality is the stronger property: orthogonal vectors are always linearly independent, but linearly independent vectors are not necessarily orthogonal.
  • λi: eigenvalues of [A], which are equal to one half of the eigenvalues of the Hessian matrix

  6. Optimization of second-degree polynomials
  • We can find the exact minimum of a second-degree polynomial by performing n one-dimensional searches along the conjugate directions S1, …, Sn
  • If all eigenvalues of [A] are positive, the second-degree polynomial has a unique minimum
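
A small numerical check of slides 4–6 (the matrix [A] and the starting point are made-up illustration values): the eigenvector directions of [A] are conjugate, and n exact one-dimensional searches along them land on the minimum of F(X) = {X}^T [A] {X}.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])              # symmetric, positive definite ([A] = H/2)
F = lambda x: x @ A @ x

lam, S = np.linalg.eigh(A)              # lam_i: eigenvalues; columns of S: S1, ..., Sn
x = np.array([2.0, -1.5])               # arbitrary starting design

for i in range(A.shape[0]):
    s = S[:, i]                         # conjugate (and orthogonal) direction S_i
    alpha = -(s @ A @ x) / (s @ A @ s)  # exact minimizer of F(x + alpha*s) for this quadratic
    x = x + alpha * s

print(x)                                # ~ [0, 0]: the exact minimum after n = 2 searches
```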

  7. Zero-order methods: random search
  • Random number generator: generates a sample of values of the variables drawn from a specified probability distribution; available in most programming languages
  • Idea: for F(x1, …, xn), generate N random n-tuples {x1^(1), …, xn^(1)}, {x1^(2), …, xn^(2)}, …, {x1^(N), …, xn^(N)}, evaluate F at each, and take the minimum
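
A minimal sketch of the random-search idea, assuming a uniform distribution between known lower and upper bounds; the function name random_search, the sample size N, and the test function are illustrative assumptions.

```python
import numpy as np

def random_search(F, lower, upper, N=10000, seed=0):
    """Draw N random n-tuples inside the bounds and keep the one with the smallest F."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    X = rng.uniform(lower, upper, size=(N, lower.size))   # N random n-tuples
    values = np.apply_along_axis(F, 1, X)                 # F evaluated at each sample
    best = values.argmin()
    return X[best], values[best]

# Example: minimize a simple quadratic over the box [-5, 5] x [-5, 5]
F = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_best, f_best = random_search(F, [-5, -5], [5, 5])
```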

  8. Powell’s method
  • Efficient, reliable, popular
  • Based on conjugate directions, although it does not use the Hessian matrix

  9. Searching for the optimum in Powell’s method
  [Figure: search path showing successive directions S1–S6]
  • First iteration: searches along S1–S3
  • Second iteration: searches along S4–S6
  • Directions S3 and S6 are conjugate
  • Present iteration: use the last two search directions from the previous iteration

  10. Powell’s method: algorithm (one iteration = n+1 one-dimensional searches)
  • Start at x0; define the set of n search directions S_q = coordinate unit vectors, q = 1, …, n; set y = x0, q = 0
  • For q = 1, …, n: find α* to minimize F(x_(q-1) + α S_q); set x_q = x_(q-1) + α* S_q
  • Find the conjugate direction S_(n+1) = x_n − y
  • Find α* to minimize F(x_n + α S_(n+1)); set x_(n+1) = x_n + α* S_(n+1)
  • Converged? If yes, exit; if no, update the search directions (S_q = S_(q+1), q = 1, …, n), set y = x_(n+1), and repeat
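
A bare-bones reading of the slide-10 flowchart in Python: n searches along the current direction set, one more along the new conjugate direction S_(n+1) = x_n − y, then the direction set is shifted. It assumes SciPy's minimize_scalar for each one-dimensional search and is a sketch, not a production implementation (scipy.optimize.minimize with method="Powell" is the robust choice).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def powell(F, x0, tol=1e-8, max_iter=50):
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    S = list(np.eye(n))                      # initial S_q: coordinate unit vectors
    for _ in range(max_iter):
        y = x.copy()                         # design at the start of this iteration
        for s in S:                          # n one-dimensional searches
            alpha = minimize_scalar(lambda a: F(x + a * s)).x
            x = x + alpha * s
        s_new = x - y                        # conjugate direction S_(n+1)
        if np.linalg.norm(s_new) < tol:      # no progress over the iteration: converged
            return x
        alpha = minimize_scalar(lambda a: F(x + a * s_new)).x
        x = x + alpha * s_new                # the (n+1)-th one-dimensional search
        S = S[1:] + [s_new]                  # shift the direction set: S_q <- S_(q+1)
    return x
```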

  11. Powell’s method
  • For a second-degree polynomial, the optimum is found in n iterations
  • Each iteration involves n+1 one-dimensional searches
  • n(n+1) one-dimensional searches in total

  12. First-order methods: Steepest Descent
  • Idea: search in the direction of the negative gradient, S = −∇F(X)
  • Starting from a design, move by a small amount; the objective function decreases most along the direction of −∇F(X)

  13. Steepest descent: algorithm
  • Start from x0
  • Determine the steepest-descent direction: S = −∇F(x)
  • Perform a one-dimensional minimization in the steepest-descent direction: find α* to minimize F(x + α S)
  • Update the design: x = x + α* S
  • Converged? If yes, stop; if no, return to the gradient step
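
The slide-13 loop as a short sketch; the gradient function gradF is assumed to be supplied by the caller (it could also be approximated by finite differences), and the one-dimensional minimization again uses SciPy's minimize_scalar.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(F, gradF, x0, tol=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        S = -gradF(x)                        # steepest-descent direction
        if np.linalg.norm(S) < tol:          # gradient ~ 0: converged
            return x
        alpha = minimize_scalar(lambda a: F(x + a * S)).x  # 1-D minimization along S
        x = x + alpha * S                    # update the design
    return x
```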

  14. Steepest Descent
  • Pros: easy to implement, robust, makes quick progress early in the optimization
  • Cons: too slow toward the end of the optimization
