
What you can do for one variable, you can do for many (in principle)



Presentation Transcript


  1. What you can do for one variable, you can do for many (in principle)

  2. Method of Steepest Descent. The method of steepest descent (also known as the gradient method) is the simplest example of a gradient-based method for minimizing a function of several variables. Its core is the following recursion formula: x(k+1) = x(k) + αk d(k), where the search direction is d(k) = S(k) = -∇F(x(k)). Remember: the direction is the negative gradient. Refer to Section 3.5 for the algorithm and stopping criteria. Advantage: simple. Disadvantage: seldom converges reliably.
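
A minimal sketch of the recursion above, assuming a fixed step size α and a gradient-norm stopping test (the slide defers the precise algorithm and stopping criteria to Section 3.5); the function names and the quadratic test problem are illustrative, not from the slides.

import numpy as np

def steepest_descent(grad, x0, alpha=0.1, tol=1e-6, max_iter=10000):
    """Minimize via x(k+1) = x(k) + alpha * d(k), with d(k) = -grad(x(k))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = -grad(x)                 # search direction: negative gradient
        if np.linalg.norm(d) < tol:  # stop when the gradient is (nearly) zero
            break
        x = x + alpha * d
    return x

# Illustrative quadratic: F(x) = 0.5 x^T Q x - b^T x, so grad F(x) = Q x - b
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_min = steepest_descent(lambda x: Q @ x - b, x0=np.zeros(2))
print(x_min, np.linalg.solve(Q, b))  # the two should (approximately) agree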

  3. Newton's Method (multi-variable case). Newton's method comes from a second-order Taylor expansion of the objective in which the remainder term is dropped (see Sec. 1.4). Its update is x(k+1) = x(k) - H^-1(x(k)) ∇f(x(k)), where H is the Hessian matrix. Like the steepest descent method, Newton's method steps along the negative gradient, but the step is scaled by the inverse Hessian H^-1 rather than by a step size α; don't confuse H^-1 with α.
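
A minimal sketch of the Newton recursion above, assuming the gradient and Hessian are supplied as callables; the names newton_method, grad, and hess are illustrative, not from the slides.

import numpy as np

def newton_method(grad, hess, x0, tol=1e-8, max_iter=50):
    """Minimize via x(k+1) = x(k) - H(x(k))^{-1} grad(x(k))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Solve H p = g rather than forming the inverse Hessian explicitly
        p = np.linalg.solve(hess(x), g)
        x = x - p
    return x

# Example: f(x, y) = (x - 1)^2 + 2*(y + 2)^2, minimized at (1, -2)
grad = lambda x: np.array([2*(x[0] - 1), 4*(x[1] + 2)])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 4.0]])
print(newton_method(grad, hess, x0=np.array([5.0, 5.0])))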

  4. Properties of Newton's Method. Newton's method has good properties (fast convergence) if started near the solution, but it needs modifications if started far from the solution. Also, the (inverse) Hessian is expensive to calculate. To overcome this, several modifications are often made. One of them is to add a search parameter α in front of the inverse Hessian (similar to the step size in steepest descent); this is often referred to as the modified Newton's method. Other modifications focus on enhancing the properties of the combination of second- and first-order gradient information. Quasi-Newton methods build up curvature information by observing the behavior of the objective function and its first-order gradient, and use this information to generate an approximation of the Hessian.
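
A sketch of the modified Newton's method mentioned above, assuming the search parameter α is chosen here by simple backtracking (one of many possible rules); the helper names and the test function are illustrative.

import numpy as np

def modified_newton(f, grad, hess, x0, tol=1e-8, max_iter=100):
    """Newton direction p = -H^{-1} g, scaled by a search parameter alpha."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = -np.linalg.solve(hess(x), g)   # Newton direction
        alpha = 1.0
        while f(x + alpha * p) > f(x) and alpha > 1e-10:
            alpha *= 0.5                   # shrink alpha until f decreases
        x = x + alpha * p
    return x

f = lambda x: (x[0] - 1)**2 + 2*(x[1] + 2)**2
grad = lambda x: np.array([2*(x[0] - 1), 4*(x[1] + 2)])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 4.0]])
print(modified_newton(f, grad, hess, np.array([5.0, 5.0])))  # approx [1, -2]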

  5. Conjugate Directions Method. Conjugate direction methods can be regarded as somewhat in between steepest descent and Newton's method, having the positive features of both. Motivation: the desire to accelerate the slow convergence of steepest descent, while avoiding the expensive evaluation, storage, and inversion of the Hessian. Application: conjugate direction methods are invariably invented and solved for the quadratic problem: Minimize (1/2) x^T Q x - b^T x. Note: the condition for optimality is ∇y = Qx - b = 0, i.e. Qx = b (a linear equation). Note: the textbook uses "A" instead of "Q".
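
For completeness, the optimality condition quoted above follows by differentiating the quadratic objective (written here in LaTeX; Q is assumed symmetric, as on the next slide):

\nabla y = \nabla\!\left(\tfrac{1}{2}\, x^T Q x - b^T x\right) = Q x - b = 0
\quad\Longleftrightarrow\quad Q x = b .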

  6. Basic Principle. Definition: Given a symmetric matrix Q, two vectors d1 and d2 are said to be Q-orthogonal, or conjugate with respect to Q, if d1^T Q d2 = 0. Note that orthogonal vectors (d1^T d2 = 0) are a special case of conjugate vectors. (Note that A is used instead of Q in your textbook.) So, since the vectors di are independent, the solution to the n-by-n quadratic problem can be rewritten as x* = α0 d0 + ... + αn-1 dn-1. Multiplying by Q and taking the scalar product with di, you can express αi in terms of d, Q, and either x* or b.
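
The step the slide alludes to, written out in LaTeX under the stated assumptions (Q symmetric, the d_i mutually Q-conjugate, and Q x* = b):

d_i^T Q x^* \;=\; d_i^T Q \sum_{j=0}^{n-1} \alpha_j d_j \;=\; \alpha_i\, d_i^T Q d_i
\quad\Longrightarrow\quad
\alpha_i \;=\; \frac{d_i^T Q x^*}{d_i^T Q d_i} \;=\; \frac{d_i^T b}{d_i^T Q d_i}.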

  7. Conjugate Gradient Method. The conjugate gradient method is the conjugate direction method obtained by selecting the successive direction vectors as a conjugate version of the successive gradients obtained as the method progresses; you generate the conjugate directions as you go along. The search direction at iteration k is dk = -gk + βk-1 dk-1 (see the update formulas on the next slide). Three advantages: 1) The gradient is always nonzero and linearly independent of all previous direction vectors. 2) There is a simple formula to determine the new direction, only slightly more complicated than steepest descent. 3) The process makes good progress because it is based on gradients.

  8. "Pure" Conjugate Gradient Method (Quadratic Case).
     0 - Starting at any x0, define d0 = -g0 = b - Q x0, where gk is the column vector of gradients of the objective function at the point xk.
     1 - Using dk, calculate the new point xk+1 = xk + ak dk, where ak = -(gk^T dk) / (dk^T Q dk).
     2 - Calculate the new conjugate gradient direction dk+1, according to dk+1 = -gk+1 + bk dk, where bk = (gk+1^T Q dk) / (dk^T Q dk).
     Note how ak is calculated; this is slightly different from your current textbook.
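
A minimal sketch of the three steps above for the quadratic case, assuming Q is symmetric positive definite; function and variable names are illustrative, not from the slides.

import numpy as np

def conjugate_gradient_quadratic(Q, b, x0, tol=1e-10):
    """Pure CG for minimizing 0.5 x^T Q x - b^T x (equivalently, solving Qx = b)."""
    x = np.asarray(x0, dtype=float)
    g = Q @ x - b          # gradient at x0
    d = -g                 # step 0: d0 = -g0 = b - Q x0
    for _ in range(len(b)):
        if np.linalg.norm(g) < tol:
            break
        Qd = Q @ d
        a = -(g @ d) / (d @ Qd)          # step 1: exact step length ak
        x = x + a * d
        g_new = Q @ x - b
        beta = (g_new @ Qd) / (d @ Qd)   # step 2: conjugacy coefficient bk
        d = -g_new + beta * d
        g = g_new
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient_quadratic(Q, b, np.zeros(2)), np.linalg.solve(Q, b))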

  9. Non-Quadratic Conjugate Gradient Methods. For non-quadratic cases, the problem is that you do not know Q, so you would have to make an approximation. One approach is to substitute the Hessian H(xk) for Q; the problem is that the Hessian then has to be evaluated at each point. Other approaches avoid Q completely by using line searches. Examples: the Fletcher-Reeves and Polak-Ribière methods. Differences from the "pure" conjugate gradient algorithm: ak is found through a line search, and different formulas are used for calculating bk.

  10. Polak-Ribière & Fletcher-Reeves Method for Minimizing f(x).
     0 - Starting at any x0, define d0 = -g0, where g is the column vector of gradients of the objective function at the point x.
     1 - Using dk, find the new point xk+1 = xk + ak dk, where ak is found using a line search that minimizes f(xk + ak dk).
     2 - Calculate the new conjugate gradient direction dk+1, according to dk+1 = -gk+1 + bk dk, where bk varies depending on which (update) formula you use. Fletcher-Reeves: bk = (gk+1^T gk+1) / (gk^T gk). Polak-Ribière: bk = (gk+1^T (gk+1 - gk)) / (gk^T gk).
     Note: gk+1 is the gradient of the objective function at the point xk+1.
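
A minimal sketch of this method, assuming a simple backtracking rule stands in for the line search that minimizes f(xk + ak dk), plus a restart safeguard when the direction is not a descent direction; the names and the Rosenbrock test function are illustrative, not from the slides.

import numpy as np

def nonlinear_cg(f, grad, x0, variant="FR", tol=1e-6, max_iter=200):
    """Non-quadratic CG: ak by line search, bk by the FR or PR formula."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Crude backtracking line search approximating argmin_a f(x + a d)
        a, fx = 1.0, f(x)
        while f(x + a * d) > fx and a > 1e-12:
            a *= 0.5
        x = x + a * d
        g_new = grad(x)
        if variant == "FR":                        # Fletcher-Reeves
            beta = (g_new @ g_new) / (g @ g)
        else:                                      # Polak-Ribiere
            beta = (g_new @ (g_new - g)) / (g @ g)
        d = -g_new + beta * d
        if g_new @ d >= 0:   # safeguard (restart): keep d a descent direction
            d = -g_new
        g = g_new
    return x

# Illustrative use on the Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100*(x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, np.array([-1.2, 1.0]), variant="PR"))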

  11. Fletcher-Reeves Method for Minimizing f(x).
     0 - Starting at any x0, define d0 = -g0, where g is the column vector of gradients of the objective function at the point x.
     1 - Using dk, find the new point xk+1 = xk + ak dk, where ak is found using a line search that minimizes f(xk + ak dk).
     2 - Calculate the new conjugate gradient direction dk+1, according to dk+1 = -gk+1 + bk dk, where bk = (gk+1^T gk+1) / (gk^T gk).
     See also Example 3.9 (page 73) in your textbook.

  12. Conjugate Gradient Method Advantages. The simple formulae for updating the direction vector are attractive. The method is only slightly more complicated than steepest descent, but converges faster. See 'em in action! For animations of all of the preceding search techniques, check out: http://www.esm.vt.edu/~zgurdal/COURSES/4084/4084-Docs/Animation.html
