
MA2213 Lecture 7


Presentation Transcript


  1. MA2213 Lecture 7 Optimization

  2. Topics Chebyshev Polynomials (pages 165-171); Finding the Minimum of a Function; Gradient of a Function; Method of Steepest Descent; The Best Approximation Problem (pages 159-165); Constrained Minimization. http://www.mat.univie.ac.at/~neum/glopt/applications.html http://en.wikipedia.org/wiki/Optimization_(mathematics)

  3. What is “argmin” ? For a set $S$ and a function $F : S \to \mathbb{R}$, $\operatorname{argmin}_{x \in S} F(x)$ denotes a point $x^* \in S$ at which $F$ attains its minimum value, i.e. $F(x^*) = \min_{x \in S} F(x)$.

  4. Optimization Problems Least Squares : given data $(x_1, y_1), \dots, (x_n, y_n)$ (or a function $f$) and a subspace $V$ of functions, compute $\operatorname{argmin}_{g \in V} \sum_{i=1}^{n} (g(x_i) - y_i)^2$ (or its continuous analogue). Spline Interpolation : given data $(x_1, y_1), \dots, (x_n, y_n)$, compute the natural cubic spline $s$ with $s(x_i) = y_i$, which minimizes $\int s''(x)^2\,dx$ among smooth interpolants. The LS equations (page 179) are derived using differentiation. The spline equations (pages 149-151) are derived similarly.

  5. The Best Approximation Problem p.159 Given $f \in C[a,b]$ and an integer $n \ge 0$. Definition For polynomials $q$ of degree $\le n$, the minimax error is $\rho_n(f) = \min_{\deg(q) \le n} \ \max_{a \le x \le b} |f(x) - q(x)|$. Definition The best approximation problem is to compute a polynomial $q_n^*$ of degree $\le n$ with $\max_{a \le x \le b} |f(x) - q_n^*(x)| = \rho_n(f)$. Best approximation (pages 159-165) is more complicated than least squares.
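For the simplest case $n = 0$ the problem has a closed-form solution: the best constant is $(\max f + \min f)/2$ and the minimax error is $\rho_0(f) = (\max f - \min f)/2$. The following MATLAB sketch (not from the text; all names chosen here) confirms this by a brute-force search for $f(x) = e^x$ on $[-1,1]$, where the closed form gives $c^* = \cosh(1) \approx 1.5431$ and $\rho_0 = \sinh(1) \approx 1.1752$.

% Best degree-0 (constant) approximation of f(x) = exp(x) on [-1,1].
% Closed form: c* = (max f + min f)/2, rho_0 = (max f - min f)/2.
f  = @(x) exp(x);
x  = linspace(-1,1,2001);           % fine grid on [-1,1]
fx = f(x);
cStar = (max(fx) + min(fx))/2;      % optimal constant
rho0  = (max(fx) - min(fx))/2;      % minimax error rho_0(f)
% Brute-force check: search over candidate constants.
cGrid = linspace(min(fx), max(fx), 2001);
err   = arrayfun(@(c) max(abs(fx - c)), cGrid);
[errMin, iMin] = min(err);
fprintf('closed form: c = %.4f, rho_0 = %.4f\n', cStar, rho0);
fprintf('grid search: c = %.4f, rho_0 = %.4f\n', cGrid(iMin), errMin);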

  6. Best Approximation Examples

  7. Best Approximation Degree 0

  8. Best Approx. Error Degree 0

  9. Best Approximation Degree 1

  10. Best Approx. Error Degree 1

  11. Properties of Best Approximation Figures 4.13 and 4.14 on page 162 display the error for the degree 3 Taylor approximation (at x = 0) and the error for the best approximation of degree 3 over the interval [-1,1] for exp(x). Together with the figures in the preceding slides, they support the assertions on pages 162-163: 1. Best approximation gives much smaller error than Taylor approximation. 2. Best approximation error tends to be dispersed over the interval rather than concentrated at the ends. 3. Best approximation error is oscillatory: it changes sign at least n+1 times in the interval, and the sizes of the oscillations are equal.

  12. Theoretical Foundations Theorem 1. (Weierstrass Approximation Theorem, 1885). If $f \in C[a,b]$ and $\epsilon > 0$, then there exists a polynomial $p$ such that $|f(x) - p(x)| \le \epsilon$ for all $a \le x \le b$. Proof Weierstrass's original proof used properties of solutions of a partial differential equation called the heat equation. A modern, more constructive proof based on Bernstein polynomials is given on pages 320-323 of Kincaid and Cheney's Numerical Analysis: Mathematics of Scientific Computing, Brooks/Cole, 2002. Corollary $\rho_n(f) \to 0$ as $n \to \infty$ for every $f \in C[a,b]$.
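The Bernstein construction mentioned in the proof is easy to try out. A minimal MATLAB sketch (not from the lecture), using the standard Bernstein polynomial $B_n(f;x) = \sum_{k=0}^{n} f(k/n) \binom{n}{k} x^k (1-x)^{n-k}$ on $[0,1]$, shows the uniform error decreasing as $n$ grows, as the theorem guarantees:

% Bernstein polynomial approximation B_n(f;x) on [0,1] for f(x) = exp(x).
% Weierstrass' theorem guarantees max|f - B_n| -> 0 as n -> infinity.
f = @(x) exp(x);
x = linspace(0,1,1001);
for n = [4 8 16 32]
    Bn = zeros(size(x));
    for k = 0:n
        Bn = Bn + f(k/n) * nchoosek(n,k) * x.^k .* (1-x).^(n-k);
    end
    fprintf('n = %2d   max|f - B_n| = %.2e\n', n, max(abs(f(x) - Bn)));
end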

  13. Accuracy of Best Approximation If $f \in C^{n+1}[a,b]$ then $\rho_n(f)$ satisfies $\rho_n(f) \le \frac{2}{(n+1)!} \left( \frac{b-a}{4} \right)^{n+1} \max_{a \le x \le b} |f^{(n+1)}(x)|$ (the error bound for interpolation at the Chebyshev nodes). Table 4.6 on page 163 compares this upper bound with computed values of $\rho_n(e^x)$ and shows that it is about 2.5 times larger.

  14. Theoretical Foundations Theorem 2. (Chebyshev's Alternation Theorem, 1859). If $f \in C[a,b]$ and $q$ is a polynomial of degree $\le n$, then $\max_{a \le x \le b} |f(x) - q(x)| = \rho_n(f)$ iff there exist $n+2$ points $a \le x_0 < x_1 < \dots < x_{n+1} \le b$ in $[a,b]$ such that $f(x_j) - q(x_j) = \sigma (-1)^j E$, $j = 0, \dots, n+1$, where $E = \max_{a \le x \le b} |f(x) - q(x)|$ and $\sigma = +1$ or $\sigma = -1$. Proof Kincaid and Cheney, page 416.

  15. Sample Problem In Example 4.4.1 on page 160 the author states that $q_1^*(x) = 1.2643 + 1.1752x$ is the best linear (equivalently stated, minimax) polynomial approximation to $f(x) = e^x$ on $[-1,1]$. Problem. Use Theorem 2 to prove this statement. Solution It suffices to find points $-1 \le x_0 < x_1 < x_2 \le 1$ such that $|e(x_j)| = \max_{-1 \le x \le 1} |e(x)|$ for each $j$, where $e(x) = e^x - q_1^*(x)$, and the sequence $e(x_0), e(x_1), e(x_2)$ changes sign twice.

  16. Sample Problem Step 1. Compute the set $C = \{ x \in (-1,1) : e'(x) = 0 \}$. Question Can this set be empty ? Observe that if $|e|$ has a maximum at an interior point $c \in (-1,1)$, then $e$ has either a maximum or a minimum at $c$, therefore $e'(c) = e^c - 1.1752 = 0$. Hence $c = \ln(1.1752) \approx 0.1614$ is the only point in $(-1,1)$ where $|e|$ can have a maximum.

  17. Sample Problem Step 2. Observe that $|e|$ is continuous, therefore it attains its maximum on the closed interval $[-1,1]$, and by Step 1 this maximum can occur only at $-1$, at $1$, and / or at $\ln(1.1752)$. Equivalently stated The maximum MUST occur at 1, 2, or all 3 points ! Step 3. Compute $e(-1) \approx 0.2788$, $e(\ln 1.1752) \approx -0.2788$, $e(1) \approx 0.2788$. Step 4. Choose the sequence $x_0 = -1$, $x_1 = \ln(1.1752)$, $x_2 = 1$: the error has equal maximal magnitude at these points and changes sign twice, so Theorem 2 implies $q_1^*$ is the minimax approximation (a numerical check follows below).
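The computation in Steps 1-4 can be verified numerically. A short MATLAB sketch (my check, not part of the lecture) evaluates the error at the three alternation points and compares with the maximum of $|e|$ over a fine grid:

% Verify that q(x) = 1.2643 + 1.1752x equioscillates against exp(x):
% the error has maximal magnitude, with alternating signs, at
% x = -1, x = log(1.1752), and x = 1 (Theorem 2 with n = 1).
q  = @(x) 1.2643 + 1.1752*x;
e  = @(x) exp(x) - q(x);
xc = [-1, log(1.1752), 1];          % endpoints + interior critical point
fprintf('e at alternation points: %8.4f %8.4f %8.4f\n', e(xc));
xg = linspace(-1,1,2001);
fprintf('max |e| on [-1,1]      : %8.4f\n', max(abs(e(xg))));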

  18. Remez Exchange Algorithm described in pages 416-419 of Kincaid and Cheney, is based on Theorem 2. Invented by Evgeny Yakovlevich Remez in 1934, it is a powerful computational algorithm with vast applications in the design of engineering systems, such as the tuned filters that allow your TV and mobile telephone to tune in to the program of your choice, or to listen (only) to the person who calls you. http://en.wikipedia.org/wiki/Remez_algorithm http://www.eepatents.com/receiver/Spec.html#D1 http://comelec.enst.fr/~rioul/publis/199302rioulduhamel.pdf

  19. Chebyshev Polynomials Definition The Chebyshev polynomials are defined by the equation $T_n(x) = \cos(n \cos^{-1} x)$, $-1 \le x \le 1$, $n = 0, 1, 2, \dots$ Remark Clearly $T_0(x) = 1$ and $T_1(x) = x$; however, it is NOT obvious that there EXISTS a polynomial that satisfies the equation above for EVERY nonnegative integer $n$ !

  20. Triple Recursion Relation derived on pages 167-168 is $T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x)$, $n \ge 1$, with $T_0(x) = 1$ and $T_1(x) = x$. Result 1. $T_n$ is a polynomial of degree exactly $n$. Result 2. For $n \ge 1$ the coefficient of $x^n$ in $T_n$ is $2^{n-1}$. Result 3. $|T_n(x)| \le 1$ for $-1 \le x \le 1$.
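A MATLAB implementation of the recursion (a sketch of mine; the name chebT is chosen here) makes the existence claim of the previous slide concrete, since the recursion manifestly produces polynomials:

function T = chebT(n, x)
% CHEBT  Evaluate the Chebyshev polynomial T_n at the points x using the
% triple recursion T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x), T_0 = 1, T_1 = x.
% Check against the definition: for x = linspace(-1,1,101),
% max(abs(chebT(7,x) - cos(7*acos(x)))) should be of roundoff size.
Tprev = ones(size(x));              % T_0(x) = 1
if n == 0, T = Tprev; return; end
T = x;                              % T_1(x) = x
for k = 2:n
    Tnew  = 2*x.*T - Tprev;         % triple recursion step
    Tprev = T;
    T     = Tnew;
end
end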

  21. Euler and the Binomial Expansion give, with $x = \cos\theta$: $T_n(x) = \cos n\theta = \operatorname{Re}\left[ (\cos\theta + i \sin\theta)^n \right] = \sum_{k=0}^{\lfloor n/2 \rfloor} \binom{n}{2k} x^{n-2k} (x^2 - 1)^k$, which displays $T_n$ explicitly as a polynomial of degree $n$.
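A quick numerical check of this expansion (my own sketch) against the defining formula:

% Check the binomial expansion
%   T_n(x) = sum_{k=0}^{floor(n/2)} C(n,2k) x^(n-2k) (x^2 - 1)^k
% against the definition T_n(x) = cos(n*acos(x)) on [-1,1].
n = 6;
x = linspace(-1,1,101);
T = zeros(size(x));
for k = 0:floor(n/2)
    T = T + nchoosek(n,2*k) * x.^(n-2*k) .* (x.^2 - 1).^k;
end
fprintf('max discrepancy for n = %d: %.2e\n', n, max(abs(T - cos(n*acos(x)))));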

  22. Gradients Definition For a differentiable $F : \mathbb{R}^n \to \mathbb{R}$, the gradient is $\nabla F(x) = \left( \frac{\partial F}{\partial x_1}(x), \dots, \frac{\partial F}{\partial x_n}(x) \right)^T$. Examples $F$ defined by $F(x) = b^T x$ has $\nabla F(x) = b$, and $F$ defined by $F(x) = \frac{1}{2} x^T A x$, where $A$ is symmetric, has $\nabla F(x) = A x$. http://en.wikipedia.org/wiki/Gradient

  23. Geometric Meaning Result If $F$ is differentiable at $x$ and $u$ is a unit vector, then the directional derivative of $F$ at $x$ in the direction $u$ is $\nabla F(x)^T u = \|\nabla F(x)\| \cos\theta$, where $\theta$ is the angle between $\nabla F(x)$ and $u$. This has a maximum value when $u = \nabla F(x) / \|\nabla F(x)\|$, and it equals $\|\nabla F(x)\|$. Therefore, the gradient of F at x is a vector in whose direction F has steepest ascent (or increase) and whose magnitude equals the rate of increase. Question : What is the direction of steepest descent ?

  24. Minima and Maxima Theorem (Calculus) If a differentiable $F : \mathbb{R}^n \to \mathbb{R}$ has a minimal or a maximal value at $x^*$, then $\nabla F(x^*) = 0$. Example If $F$ is defined by $F(x) = x_1^2 + x_2^2$ then $\nabla F(x) = (2x_1, 2x_2)^T$, and so $\nabla F(x^*) = 0$ at the minimizer $x^* = (0,0)^T$. Remark The converse fails: for instance, the function $F(x) = x_1^2 - x_2^2$ satisfies $\nabla F(0) = 0$, however it has no maxima and no minima.

  25. Linear Equations and Optimization Theorem If $A \in \mathbb{R}^{n \times n}$ is symmetric and positive definite then for every $b \in \mathbb{R}^n$ the function $F : \mathbb{R}^n \to \mathbb{R}$ defined by $F(x) = \frac{1}{2} x^T A x - b^T x$ satisfies the following three properties: 1. $F(x) \to \infty$ as $\|x\| \to \infty$. 2. $F$ has a minimum value at some $x^*$; therefore, by property 3, it is unique. 3. $x^*$ satisfies $A x^* = b$. Proof Let $\lambda > 0$ be the smallest eigenvalue of $A$. Since $A$ is pos. def., $F(x) \ge \frac{1}{2} \lambda \|x\|^2 - \|b\| \, \|x\| \to \infty$ as $\|x\| \to \infty$.
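Both the gradient formula $\nabla F(x) = Ax - b$ and property 3 are easy to check numerically. A sketch (mine; it reuses the matrix $A$ and vector $b$ that appear in the steepest-descent example later in this lecture):

% For F(x) = 0.5*x'*A*x - b'*x with A symmetric positive definite,
% check that grad F(x) = A*x - b (central differences) and that the
% minimizer is x* = A\b.
A = [1 1; 1 2];  b = [2 3]';
F = @(x) 0.5*x'*A*x - b'*x;
x = [0.3 -0.7]';  h = 1e-6;
g = zeros(2,1);                     % finite-difference gradient
for i = 1:2
    ei = zeros(2,1);  ei(i) = h;
    g(i) = (F(x+ei) - F(x-ei)) / (2*h);
end
fprintf('finite-diff gradient : %9.6f %9.6f\n', g);
fprintf('A*x - b              : %9.6f %9.6f\n', A*x - b);
fprintf('minimizer A\\b        : %9.6f %9.6f\n', A\b);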

  26. Linear Equations and Optimization Therefore there exists a number $R > 0$ such that $F(x) > F(0)$ whenever $\|x\| > R$. Since the set $\{ x : \|x\| \le R \}$ is bounded and closed, there exists $x^*$ with $\|x^*\| \le R$ such that $F(x^*) \le F(x)$ for all $\|x\| \le R$, and hence for all $x \in \mathbb{R}^n$. Therefore, by the preceding calculus theorem, $\nabla F(x^*) = A x^* - b = 0$. Furthermore, since $A$ is positive definite it is nonsingular, and it follows that $x^* = A^{-1} b$ is unique.

  27. Application to Least Squares Geometry Theorem Given $y \in \mathbb{R}^n$ and a matrix $B \in \mathbb{R}^{n \times m}$ with linearly independent columns (or equivalently, with $B^T B$ nonsingular), and $F(c) = \|B c - y\|^2$ for $c \in \mathbb{R}^m$, then the following conditions are equivalent (i) The function $F$ has a minimum value at $c$ (ii) $B^T B c = B^T y$ (iii) $(B c - y) \perp \operatorname{span}(B)$; this is read as : Bc-y is orthogonal (or perpendicular) to the subspace of $\mathbb{R}^n$ spanned by column vectors of $B$.

  28. Application to Least Squares Geometry Proof (i) iff (ii) First observe that $A = B^T B$ is symmetric and positive definite. Since $F(c) = \|B c - y\|^2 = 2 \left( \frac{1}{2} c^T A c - (B^T y)^T c \right) + \|y\|^2$, the preceding theorem (with $b = B^T y$) implies that $F$ has a minimum value at $c$ iff $B^T B c = B^T y$. (ii) iff (iii) $B^T B c = B^T y$ iff $B^T (B c - y) = 0$ iff every column of $B$ is orthogonal to $B c - y$. This proof that (ii) iff (iii) was emailed to me by Fu Xiang
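A short MATLAB sketch (not from the lecture; the data here are random) illustrating the equivalence of (ii) and (iii):

% Least squares via the normal equations: solve B'*B*c = B'*y.
% Condition (iii) says the residual B*c - y is orthogonal to every
% column of B, i.e. B'*(B*c - y) = 0.
rng(7);                             % reproducible example
B = randn(20,3);                    % 20 x 3, full column rank
y = randn(20,1);
c = (B'*B) \ (B'*y);                % normal equations
fprintf('||B''*(B*c - y)|| = %.2e (should be near 0)\n', norm(B'*(B*c - y)));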

  29. Steepest Descent Method of Cauchy (1847) is a numerical algorithm to compute the solution of the following problem: given a symmetric positive definite $A$ and a vector $b$, minimize $F(y) = \frac{1}{2} y^T A y - b^T y$ (equivalently, solve $A y = b$). Do the following: 1. Start with a guess $y_1$, and for $k = 1, \dots, N$ do the following: 2. Compute the descent direction $d_k = b - A y_k = -\nabla F(y_k)$. 3. Compute the step length $t_k = \frac{d_k^T d_k}{d_k^T A d_k}$, which minimizes $F(y_k + t d_k)$ over $t$. 4. Compute $y_{k+1} = y_k + t_k d_k$. Reference : pages 440-441 Numerical Methods by Dahlquist, G. and Björck, Å., Prentice-Hall, 1974.

  30. Application of Steepest Descent to minimize the previous function $F(y) = \frac{1}{2} y^T A y - b^T y$ with $A = [1\ 1;\ 1\ 2]$ and $b = [2\ 3]^T$, whose exact minimizer is $y = A^{-1} b = (1, 1)^T$. 1. Start with $y_1$, and for $k = 1, \dots, N$ do the following: 2. Compute $d_k = b - A y_k$. 3. Compute $t_k = d_k^T d_k / (d_k^T A d_k)$. 4. Compute $y_{k+1} = y_k + t_k d_k$.

  31. MATLAB CODE
function [A,b,y,er] = steepdesc(N,y1)
% function [A,b,y,er] = steepdesc(N,y1)
% Steepest descent for F(y) = 0.5*y'*A*y - b'*y, i.e. for solving A*y = b.
A = [1 1; 1 2]; b = [2 3]';
% Tabulate F on the grid [0,2] x [0,2] for the contour plot.
dx = 1/10;
F = zeros(21,21);
for i = 1:21
  for j = 1:21
    x = [(j-1)*dx (i-1)*dx]';   % column index j <-> 1st coordinate (meshgrid convention)
    F(i,j) = .5*x'*A*x - b'*x;
  end
end
X = ones(21,1)*(0:.1:2); Y = X';
[FX,FY] = gradient(F);
contour(X,Y,F,20)
hold on
quiver(X,Y,FX,FY)               % gradient arrows point uphill
% Steepest descent iteration.
y(:,1) = y1;
er = zeros(1,N);
for k = 1:N
  yk = y(:,k);
  dk = b - A*yk;                % dk = -grad F(yk), steepest descent direction
  tk = (dk'*dk)/(dk'*A*dk);     % exact line search step
  y(:,k+1) = yk + tk*dk;
  er(k) = norm(A*y(:,k+1) - b);
end
plot(y(1,:),y(2,:),'ro')        % iterates plotted over the contours

  32. Graphics of Steepest Descent

  33. Constrained Optimization Problem Minimize $F(x)$, $x \in \mathbb{R}^n$, subject to a constraint $G(x) = 0$, where $G : \mathbb{R}^n \to \mathbb{R}^m$. The Lagrange-multiplier method computes $(x, \lambda) \in \mathbb{R}^n \times \mathbb{R}^m$ that solves the n-equations $\nabla F(x) = \sum_{i=1}^{m} \lambda_i \nabla G_i(x)$ and the m-equations $G(x) = 0$. This will generally result in a nonlinear system of equations, a topic that is discussed in Lecture 9. http://en.wikipedia.org/wiki/Lagrange_multiplier http://www.slimy.com/~steuard/teaching/tutorials/Lagrange.html

  34. Examples 1. Minimize (for instance) $F(x) = x_1^2 + x_2^2$ with the constraint $G(x) = x_1 + x_2 - 1 = 0$. Since $\nabla F(x) = 2x$ and $\nabla G(x) = (1, 1)^T$, the method of Lagrange multipliers gives $2 x_1 = \lambda$ and $2 x_2 = \lambda$, and with the constraint, $x_1 = x_2 = 1/2$. 2. Maximize $F(x) = x^T A x$ where $A$ is symmetric and positive definite, subject to the constraint $x^T x = 1$. This gives $2 A x = 2 \lambda x$, hence $x$ is an eigenvector of $A$ and $F(x) = x^T A x = \lambda$, and the constrained maximum of $F$ is the largest eigenvalue of $A$. Therefore $\max_{x^T x = 1} x^T A x = \lambda_{\max}(A)$.
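Example 2 is easy to test numerically. In the sketch below (mine; the matrix is chosen here), no sampled unit vector beats the largest eigenvalue returned by eig:

% Maximize x'*A*x subject to x'*x = 1: the maximum is the largest
% eigenvalue of the symmetric positive definite matrix A.
A = [2 1 0; 1 3 1; 0 1 2];          % eigenvalues 1, 2, 4
lamMax = max(eig(A));
vals = zeros(1,10000);              % sample random unit vectors
for k = 1:numel(vals)
    x = randn(3,1);  x = x / norm(x);
    vals(k) = x'*A*x;
end
fprintf('largest eigenvalue : %.6f\n', lamMax);
fprintf('best sampled value : %.6f\n', max(vals));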

  35. Homework Due Tutorial 4 (Week 9, 15-19 Oct) 1. Do problem 7 on page 165. Suggestion: practice by doing problem 2 on page 164 and problem 5 on page 165, since these problems are similar and have solutions on pages 538-539. Do NOT hand in solutions for your practice problems. 2. Do problem 10 on pages 170-171. Suggestion: study the discussion of the minimum size property on pages 168-169, then practice by doing problem 3 on page 169. Do NOT hand in solutions for your practice problems. Extra Credit : Compute … Suggestion: THINK about Theorem 2 and problem 3 on page 169.

  36. Homework Due Tutorial 4 (Week 9, 15-19 Oct) 3. The trapezoid method for integrating a function $f$ using $n$ equal length subintervals can be shown to give an estimate having the form $T(n) = I + \frac{c_1}{n^2} + \frac{c_2}{n^4} + \frac{c_3}{n^6} + \cdots$, where $I$ is the exact integral and the sequence $c_1, c_2, c_3, \dots$ depends on $f$ but not on $n$. (a) Show that for any $n$, $S(2n) = \frac{4 T(2n) - T(n)}{3}$ is the estimate for the integral obtained using Simpson's method with $2n$ equal length subintervals. (b) Use this fact together with the form of $T(n)$ above to prove that there exists a sequence $d_2, d_3, \dots$ with $S(2n) = I + \frac{d_2}{n^4} + \frac{d_3}{n^6} + \cdots$. (c) Compute constants $a, b, c$ so that there exists a sequence $e_3, e_4, \dots$ with $a T(n) + b T(2n) + c T(4n) = I + \frac{e_3}{n^6} + \frac{e_4}{n^8} + \cdots$.

  37. Homework Due Lab 4 (Week 10, 22-26 October) 4. Consider the equations $u_{i,j} = \frac{1}{4} \left( u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1} \right)$ for the 9 variables inside a $5 \times 5$ array whose boundary values are given. (a) Write these equations as $A u = b$, then solve using Gauss Elim. and display the solution in the array. (b) Compute the Jacobi iteration matrix and its spectral radius. (c) Write a MATLAB program to implement the Jacobi method for a (n+2) x (n+2) array without computing a sparse matrix A.

  38. Homework Due Tutorial 4 (Week 9, 15-19 Oct) 5. Consider the equation $A u = b$ of the preceding problem, where $A$ is the $9 \times 9$ matrix of the averaging equations. (a) Prove that the vectors $v^{(p,q)}$, where $v^{(p,q)}_{i,j} = \sin\frac{\pi p i}{4} \sin\frac{\pi q j}{4}$ for $p, q \in \{1, 2, 3\}$, are eigenvectors of the Jacobi iteration matrix, and compute their eigenvalues. (b) Prove that the Jacobi method for this matrix converges by showing that the spectral radius of the iteration matrix is < 1.

  39. Homework Due Lab 4 (Week 10, 22-26 October) 1. (a) Modify the computer code developed for Lab 3 to compute polynomials that interpolate the function 1/(1+x*x) on the interval [-5,5] based on N = 4, 8, 16, and 32 nodes located at the points x(j) = 5 cos((2j - 1)pi/(2N)), j = 1,…,N. (b) Compare the results with the results you obtained in Lab 3 using uniform nodes. (c) Plot the functions both for the case where the nodes x(j) are uniform and for the case where they are chosen as above. (d) Show that x(j)/5 are the zeros of a Chebyshev polynomial, then derive a formula for the node polynomial w(x) = (x - x(1)) ··· (x - x(N)) and use this formula to explain why the use of the nonuniform nodes x(j) above gives a smaller interpolation error than the use of uniform nodes.

  40. Homework Due Lab 4 (Week 10, 22-26 October) 2. (a) Write computer code to compute trapezoidal approximations for … and run this code to compute approximations I(n) and associated errors for n = 2, 4, 8, 16, 32, 64 and 128 intervals. (b) Use the (Romberg) formula that you developed in Tutorial 4 to combine I(n), I(2n), and I(4n) for n = 2, 4, 8, 16, 32 to develop more accurate approximations R(n). Compute the ratios of consecutive errors (I - I(2n))/(I - I(n)) and (I - R(2n))/(I - R(n)) for n = 2, 4, 8, 16, present them in a table and discuss them (I denotes the exact integral). (c) Compute approximations to the integral in (a) using Gauss quadrature with n = 1, 2, 3, 4, present the errors in a table and compare them to the errors obtained in (a) and (b) above.

  41. Homework Due Lab 5 (Week 12, 5-9 November) 3. (a) Use the MATLAB program from problem 4(c) above to compute the internal variables u_{i,j} (those whose indices satisfy the inequalities $1 \le i, j \le n$) in the (n+2) x (n+2) array for n = 50. (b) Display the solution using the MATLAB mesh and contour commands. (c) Find a polynomial P of two variables so that the exact solution satisfies u_{i,j} = P(i, j), and use it to compute and display the error.
