
Chapter 2-OPTIMIZATION




Presentation Transcript


  1. Chapter 2-OPTIMIZATION G.Anuradha

  2. Contents • Derivative-based Optimization • Descent Methods • The Method of Steepest Descent • Classical Newton’s Method • Step Size Determination • Derivative-free Optimization • Genetic Algorithms • Simulated Annealing • Random Search • Downhill Simplex Search

  3. What is Optimization? • Choosing the best element from some set of available alternatives • Solving problems in which one seeks to minimize or maximize a real function

  4. Notation of Optimization Optimize y = f(x1, x2, …, xn) ……(1) subject to gj(x1, x2, …, xn) ≤ / ≥ / = bj, where j = 1, 2, …, n ……(2) Eqn. (1) is the objective function; Eqn. (2) is the set of constraints imposed on the solution. x1, x2, …, xn are the decision variables. Note:- the problem is either to maximize or minimize the value of the objective function.
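A constrained minimization of the form above can be sketched by brute force: evaluate the objective on a grid and keep the best feasible point. The objective, the constraint, and the grid bounds below are illustrative assumptions, not from the slides.

```python
# Brute-force sketch of constrained minimization: keep the best grid point
# that satisfies the constraint. Objective f, constraint g, and grid are
# illustrative assumptions.

def objective(x1, x2):
    return (x1 - 1) ** 2 + (x2 - 2) ** 2      # f(x1, x2) to minimize

def feasible(x1, x2):
    return x1 + x2 <= 2                       # constraint g(x1, x2) <= b

def grid_search(lo=-3.0, hi=3.0, steps=25):
    best, best_val = None, float("inf")
    h = (hi - lo) / (steps - 1)               # grid spacing
    for i in range(steps):
        for j in range(steps):
            x1, x2 = lo + i * h, lo + j * h
            val = objective(x1, x2)
            if feasible(x1, x2) and val < best_val:
                best, best_val = (x1, x2), val
    return best, best_val
```

For this illustrative problem the unconstrained minimum (1, 2) violates x1 + x2 ≤ 2, so the search settles on the constraint boundary near (0.5, 1.5).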

  5. Complicating factors in optimization • Existence of multiple decision variables • Complex nature of the relationships between the decision variables and the associated income • Existence of one or more complex constraints on the decision variables

  6. Types of optimization • Constrained:- the objective function is maximized or minimized subject to constraints on the decision variables • Unconstrained:- no constraints are imposed on the decision variables, and differential calculus can be used to analyze them

  7. Least Square Methods for System Identification • System identification:- determining a mathematical model for an unknown system by observing its input-output data pairs • System identification is required • To predict a system's behavior • To explain the interactions and relationships between inputs and outputs • To design a controller • System identification involves • Structure identification • Parameter identification

  8. Structure identification • Apply a priori knowledge about the target system to determine a class of models within which the search for the most suitable model is conducted • y = f(u; θ), where y is the model's output, u is the input vector, and θ is the parameter vector

  9. Parameter Identification • The structure of the model is known, and optimization techniques are applied to determine the parameter vector θ

  10. Block diagram of parameter identification

  11. Parameter identification • An input ui is applied to both the system and the model • The difference between the target system's output yi and the model's output ŷi is used to update the parameter vector θ to minimize the difference • System identification is not a one-pass process; structure and parameter identification need to be done repeatedly
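The update loop described on this slide can be sketched for a one-parameter model: the same input drives both the unknown system and the model, and the output error updates the parameter. The target system, model structure, and learning rate are illustrative assumptions.

```python
# Minimal sketch of the parameter-identification loop: update a single
# gain theta so the model output tracks the system output. The "system"
# (true gain 2.0) and the learning rate are assumptions for illustration.

def system(u):
    return 2.0 * u          # unknown target system

def model(u, theta):
    return theta * u        # assumed model structure y = theta * u

theta = 0.0                 # initial parameter guess
lr = 0.1                    # learning rate (assumed)
inputs = [0.5, -1.0, 2.0, 1.5] * 50

for u in inputs:
    error = system(u) - model(u, theta)   # y_i - y_hat_i
    theta += lr * error * u               # gradient step on the squared error
```

After the loop, theta converges to the true gain 2.0 for this noise-free example.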

  12. Classification of Optimization algorithms • Derivative-based algorithms • Derivative-free algorithms

  13. Characteristics of derivative-free algorithms • Derivative freeness:- rely on repeated evaluation of the objective function rather than on its derivatives • Intuitive guidelines:- concepts are based on nature's wisdom, such as evolution and thermodynamics • Slowness:- generally slower than derivative-based methods • Flexibility • Randomness:- makes them global optimizers • Analytic opacity:- knowledge about them is based on empirical studies • Iterative nature

  14. Characteristics of derivative-free algorithms • Stopping condition of iteration:- let k denote an iteration count and fk denote the best objective function value obtained at count k. The stopping condition depends on • Computation time • Optimization goal • Minimal improvement • Minimal relative improvement
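The last two stopping conditions listed above can be sketched as a simple check on successive best values fk; the tolerance thresholds are illustrative assumptions.

```python
# Sketch of iteration-stopping checks: stop when the best objective value
# improves too little between counts, absolutely or relatively.
# Thresholds abs_tol and rel_tol are illustrative assumptions.

def should_stop(f_prev, f_k, abs_tol=1e-8, rel_tol=1e-6):
    improvement = f_prev - f_k                         # minimal improvement
    relative = improvement / max(abs(f_prev), 1e-12)   # minimal relative improvement
    return improvement < abs_tol or relative < rel_tol
```

In practice this check would be combined with a computation-time budget and a target objective value, the other two conditions on the slide.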

  15. Basics of Matrix Manipulation and Calculus

  16. Basics of Matrix Manipulation and Calculus

  17. Gradient of a Scalar Function

  18. Jacobian of a Vector Function
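The formulas on these two slides did not survive transcription, so as a stand-in: the gradient collects the partial derivatives of a scalar function, and the Jacobian stacks the gradients of each component of a vector function. A sketch approximating both by central finite differences, with illustrative example functions:

```python
# Numerical sketch: gradient of a scalar function and Jacobian of a
# vector function via central finite differences. The example functions
# and evaluation point are illustrative assumptions.

def grad(f, x, h=1e-6):
    """Gradient of scalar f at point x (a list of floats)."""
    g = []
    for i in range(len(x)):
        xp, xm = x[:], x[:]
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))   # central difference in coord i
    return g

def jacobian(fs, x, h=1e-6):
    """Jacobian of a vector function given as a list of scalar components."""
    return [grad(f, x, h) for f in fs]

f = lambda x: x[0] ** 2 + 3 * x[1]                 # scalar function
g = [lambda x: x[0] * x[1], lambda x: x[1] ** 2]   # vector function

gf = grad(f, [2.0, 1.0])        # ≈ [4.0, 3.0]
J = jacobian(g, [2.0, 1.0])     # ≈ [[1.0, 2.0], [0.0, 2.0]]
```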

  19. Least Square Estimator • The method of least squares is a standard approach to the approximate solution of overdetermined systems • Least squares:- the overall solution minimizes the sum of the squares of the errors made in solving every single equation • Application:- data fitting
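A least-squares fit of a line to an overdetermined data set can be sketched with the closed-form normal-equation solution for the two-parameter case; the data points below are illustrative assumptions.

```python
# Least-squares fit of y = a + b*x to more data points than parameters,
# using the closed-form solution of the 2x2 normal equations.
# The data set is an illustrative assumption (roughly y = 2x).

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a = (sy - b * sx) / n                           # intercept
    return a, b

a, b = fit_line(xs, ys)   # no single line passes through all five points;
                          # (a, b) minimizes the sum of squared residuals
```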

  20. Types of Least Squares • Least Squares • Linear:- the model is a linear combination of the parameters. It may represent a straight line, a parabola or any other linear combination of functions • Non-linear:- the parameters appear inside nonlinear functions, such as β² or e^(βx). If the derivatives with respect to the parameters are either constant or depend only on the values of the independent variable, the model is linear; otherwise it is non-linear.

  21. Differences between Linear and Non-Linear Least Squares

  22. Linear regression with one variable: Model representation (Machine Learning)

  23. Housing Prices (Portland, OR): price (in 1000s of dollars) vs. size (feet²). Supervised learning: given the "right answer" for each example in the data. Regression problem: predict real-valued output.

  24. Training set of housing prices (Portland, OR). Notation: m = number of training examples, x's = "input" variable / features, y's = "output" variable / "target" variable

  25. Training Set → Learning Algorithm → hypothesis h. Given the size of a house, h outputs the estimated price. How do we represent h? Linear regression with one variable (univariate linear regression): h(x) = θ0 + θ1x.

  26. Linear regression with one variable: Cost function

  27. Training Set. Hypothesis: h(x) = θ0 + θ1x. θ0, θ1: parameters. How do we choose the θ's?

  28. Idea: choose θ0, θ1 so that h(x) is close to y for our training examples (x, y)

  29. Linear regression with one variable: Cost function intuition I

  30. Simplified hypothesis: h(x) = θ1x (θ0 = 0). Parameter: θ1. Cost function: J(θ1) = (1/2m) Σi (h(xi) − yi)². Goal: minimize J(θ1).
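Assuming the standard form of this simplified cost function, J(θ1) = (1/2m) Σ (θ1xi − yi)², a minimal sketch on an illustrative training set:

```python
# Squared-error cost for the simplified hypothesis h(x) = theta1 * x.
# The training set is an illustrative assumption: the data lies exactly
# on y = x, so the cost is zero at theta1 = 1 and grows away from it.

xs = [1.0, 2.0, 3.0]
ys = [1.0, 2.0, 3.0]

def cost(theta1):
    m = len(xs)
    return sum((theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)
```

Evaluating cost(theta1) over a range of theta1 values traces out the bowl-shaped J(θ1) curve plotted on the next slides.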

  31-33. [Plots: left, the hypothesis h(x) for a fixed θ1, as a function of x; right, the cost J(θ1) as a function of the parameter θ1, traced out for several values of θ1.]

  34. Linear regression with one variable: Cost function intuition II

  35. Hypothesis: h(x) = θ0 + θ1x. Parameters: θ0, θ1. Cost function: J(θ0, θ1) = (1/2m) Σi (h(xi) − yi)². Goal: minimize J(θ0, θ1).

  36-40. [Plots: left, the hypothesis h(x) for fixed (θ0, θ1), price ($) in 1000's vs. size in feet² (x); right, J(θ0, θ1) as a function of the parameters, shown as a surface and as contour plots.]

  41. Linear regression with one variable: Gradient descent

  42. Have some function J(θ0, θ1). Want min over θ0, θ1 of J(θ0, θ1). • Outline: • Start with some θ0, θ1 • Keep changing θ0, θ1 to reduce J(θ0, θ1) until we hopefully end up at a minimum
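The outline above can be sketched for univariate linear regression: start from some (θ0, θ1) and repeatedly step opposite the gradient of J. The learning rate, iteration count, and data are illustrative assumptions.

```python
# Gradient descent on the squared-error cost J(theta0, theta1) for
# univariate linear regression. The data (exactly y = 1 + 2x), learning
# rate, and iteration count are illustrative assumptions.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]

theta0, theta1 = 0.0, 0.0   # start with some theta0, theta1
alpha = 0.05                # learning rate (assumed)
m = len(xs)

for _ in range(5000):
    err = [theta0 + theta1 * x - y for x, y in zip(xs, ys)]
    g0 = sum(err) / m                              # dJ/dtheta0
    g1 = sum(e * x for e, x in zip(err, xs)) / m   # dJ/dtheta1
    # simultaneous update of both parameters
    theta0, theta1 = theta0 - alpha * g0, theta1 - alpha * g1
```

For this noise-free data the iterates converge to the generating line (θ0, θ1) = (1, 2); note that both parameters are updated simultaneously from the same error vector.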
