Modeling methods


  1. Modeling methods • Used in system identification and in Model Reference Adaptive Systems (MRAS).

  2. Linear in parameter models (models for describing linear systems):
  • Auto-Regressive model (AR model)
  • Moving Average model (MA model)
  • Finite Impulse Response model (FIR model)
  • Auto-Regressive model with extraneous (extra) input (ARX model)
  • Auto-Regressive Moving Average model (ARMA model)
  • Auto-Regressive Moving Average model with extraneous input (ARMAX model)
  • Auto-Regressive Integrated Moving Average model (ARIMA model)
  • Auto-Regressive Integrated Moving Average model with extraneous input (ARIMAX model)

  3. Contd.
  • Each of the above models describes the relationship between the input, the output and the error in its own way.
  • Some models give much freedom in describing the input, some in describing the error, and others in describing the output; certain models describe the input, output and error all with freedom.
  • Based on the plant conditions, a particular model can be chosen.
  • The choice of a suitable model for a plant is very important, since the parameters to be estimated depend on the model chosen.
  • Great care therefore needs to be taken in choosing the model used to describe the plant.

  4. Auto-Regressive model (AR model)
  • The auto-regressive model is given by the equation A(q^-1) y(t) = e(t), where A(q^-1) = 1 + a1 q^-1 + a2 q^-2 + ... + an q^-n, y(t) is the output, e(t) is white noise and q^-1 is the backward shift operator.

  5. • This model describes a relation only between the output and the error.
  • The freedom in describing the output is greater than that for the error.
  • This model is seldom used on its own to describe a plant, since the input is not described here; it is usually used in combination with other models.
  • The block diagram representation of the AR model is given by:
  The parameter vector to be estimated in this model is θ = [a1 a2 ... an]^T.
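As a quick illustration (a minimal sketch, not from the slides: the AR(2) coefficients, noise level and sample size are assumed for the example), an AR process can be simulated and its parameter vector recovered by least squares on past outputs:

```python
import numpy as np

# Simulate the assumed AR(2) model  y(t) + a1*y(t-1) + a2*y(t-2) = e(t),
# with e(t) white noise, then estimate theta = [a1, a2]^T from the
# regressors [-y(t-1), -y(t-2)].
rng = np.random.default_rng(0)
a1, a2 = -1.5, 0.7            # assumed true parameters (stable polynomial)
N = 2000
e = rng.standard_normal(N)
y = np.zeros(N)
for t in range(2, N):
    y[t] = -a1 * y[t - 1] - a2 * y[t - 2] + e[t]

Phi = np.column_stack([-y[1:N - 1], -y[0:N - 2]])   # rows: [-y(t-1), -y(t-2)]
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(theta)  # close to [-1.5, 0.7]
```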

  6. Moving Average model (MA model)
  • The moving average model is given by the equation y(t) = C(q^-1) e(t), where C(q^-1) = 1 + c1 q^-1 + c2 q^-2 + ... + cn q^-n and e(t) is white noise.

  7. • This model describes a relation only between the output and the error.
  • This model is called the Moving Average model because the output here is expressed as a moving average of the white noise.

  8. • More freedom is given to the description of the error than to the output.
  • This model is seldom used on its own to describe a plant, since the input is not described here; it is usually used in combination with other models.
  • The block diagram representation of the MA model is given by:
  The parameter vector to be estimated in this model is θ = [c1 c2 ... cn]^T.

  9. Finite Impulse Response model (FIR model)
  • The Finite Impulse Response model is given by the equation y(t) = B(q^-1) u(t) + e(t), where B(q^-1) = b1 q^-1 + b2 q^-2 + ... + bm q^-m, u(t) is the input and e(t) is white noise.

  10. • This model describes a relation between the input, the error and the output.
  • The input can be described with much freedom compared to the error and the output.
  • This model can be used to describe plants where much freedom is not required for the description of errors.
  • The block diagram representation of the FIR model is:
  The parameter vector to be estimated in this model is θ = [b1 b2 ... bm]^T.
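Because the FIR regressors are past inputs only, the parameter vector can be estimated by ordinary least squares. A minimal sketch (the two-tap coefficients and noise level below are assumed for illustration):

```python
import numpy as np

# Identify the assumed FIR model  y(t) = b1*u(t-1) + b2*u(t-2) + e(t)
# by least squares; the regression vector contains past inputs only.
rng = np.random.default_rng(1)
b_true = np.array([0.8, -0.4])     # assumed true impulse response taps
N = 1000
u = rng.standard_normal(N)
e = 0.1 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(2, N):
    y[t] = b_true[0] * u[t - 1] + b_true[1] * u[t - 2] + e[t]

Phi = np.column_stack([u[1:N - 1], u[0:N - 2]])    # rows: [u(t-1), u(t-2)]
b_hat, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(b_hat)  # close to [0.8, -0.4]
```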

  11. Auto-Regressive model with extraneous (extra) input (ARX model)
  • Also known as the Equation Error model, it is given by A(q^-1) y(t) = B(q^-1) u(t) + e(t), where A(q^-1) = 1 + a1 q^-1 + ... + an q^-n and B(q^-1) = b1 q^-1 + ... + bm q^-m.

  12. • This model describes a relation between the input, the error and the output.
  • Also, the input and output can be described with much freedom compared to the error.
  • This model can be used to describe plants where much freedom is not required for the description of errors.
  • The block diagram representation of the ARX model is:
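Since the ARX model is linear in its parameters, it too can be identified by a single least-squares fit on past outputs and inputs. A minimal first-order sketch (the coefficients a, b and the noise level are assumptions for the example):

```python
import numpy as np

# Identify the assumed first-order ARX model
#   y(t) + a*y(t-1) = b*u(t-1) + e(t)
# from input/output data; regressor rows are [-y(t-1), u(t-1)].
rng = np.random.default_rng(2)
a, b = -0.9, 0.5               # assumed true parameters
N = 2000
u = rng.standard_normal(N)
e = 0.1 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a * y[t - 1] + b * u[t - 1] + e[t]

Phi = np.column_stack([-y[:N - 1], u[:N - 1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)  # close to [-0.9, 0.5]
```

With white equation noise, as assumed here, the least-squares estimate of an ARX model is consistent; with colored noise it would be biased, which is one motivation for the ARMAX model below.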

  13. Auto-Regressive Moving Average model (ARMA model)
  • The model is described by the equation A(q^-1) y(t) = C(q^-1) e(t).

  14. • This model is a combination of the Auto-Regressive (AR) model and the Moving Average (MA) model.
  • This model gives a relation between the output and the error; both are described with much freedom.
  • This model is not often used to describe plants, as the input is not considered here.
  • The block diagram representation of the ARMA model is:

  15. Auto-Regressive Moving Average model with extraneous input (ARMAX model)
  • Described by the equation A(q^-1) y(t) = B(q^-1) u(t) + C(q^-1) e(t).

  16. • An extension of the ARMA model, where an extraneous input u(t) is added to the model.
  • Used to describe the plant, as this model describes the input, the error and the output with full freedom.
  • The block diagram representation of the ARMAX model is:

  17. Auto-Regressive Integrated Moving Average model with eXtraneous input (ARIMAX model)
  • The models described above are valid only for white-noise disturbances. To describe disturbances that are variable or drifting in nature, the ARIMAX model is preferred.
  • The equation describing the ARIMAX model is A(q^-1) y(t) = B(q^-1) u(t) + C(q^-1) e(t) / Δ, where Δ = 1 - q^-1 is the differencing operator.

  18. • In this model, the disturbance is described as the sum of a constant part and a variable part.
  • Hence, this model can be used to describe systems where the disturbance is drifting in nature.
  • The parameter vector to be estimated contains the coefficients of the A, B and C polynomials.

  19. Note

  20. Parametric Estimation Techniques
  • A parametric estimation technique is characterized by a finite-dimensional parameter vector and a mapping from the recorded data to the estimated parameter vector.
  • So, in parametric methods, the result of identification can be expressed as a finite-dimensional parameter vector in matrix form.
  • Some of the parametric estimation techniques are:
  • Least Squares (LS) estimation
  • Recursive Least Squares (RLS) estimation
  • Extended Least Squares (ELS) estimation and
  • Least Mean Square (LMS) estimation

  21. Least Squares Estimation
  • The least squares principle is due to Carl Friedrich Gauss, who stated that "the unknown parameters of a mathematical model should be chosen in such a way that the sum of the squares of the differences between the actually observed and computed values, multiplied by numbers that measure the degree of precision, is a minimum".

  22. • The least squares principle is simple to apply to a mathematical model that can be written in the form y(i) = φ1(i)θ1 + φ2(i)θ2 + ... + φn(i)θn = φ(i)^T θ.

  23. • Such a model is called a regression model.
  • The model is indexed by the variable i, which often denotes time.
  • The variables φ1(i), ..., φn(i) are called the regression variables (regressors), and are usually a set of inputs.
  • As per the least squares principle, the parameter vector θ should be chosen to minimize the least-squares loss function V(θ, t) = (1/2) Σ_{i=1}^{t} (y(i) - φ(i)^T θ)².

  24. • Completing the square in V(θ, t), the first term on the right-hand side is independent of θ and the second term is always positive.
  • Hence the minimum is obtained for θ̂ = (Φ^T Φ)^{-1} Φ^T Y, provided Φ^T Φ is nonsingular; here Φ is the matrix with rows φ(i)^T and Y = [y(1) ... y(t)]^T.
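A minimal numerical check of the closed-form estimate (the regression matrix and true parameters are made up for the example):

```python
import numpy as np

# Closed-form least-squares estimate  theta_hat = (Phi^T Phi)^{-1} Phi^T Y,
# computed via the normal equations; np.linalg.solve avoids forming an
# explicit matrix inverse.
rng = np.random.default_rng(3)
theta_true = np.array([2.0, -1.0, 0.5])       # assumed true parameters
Phi = rng.standard_normal((200, 3))           # regression matrix, rows phi(i)^T
Y = Phi @ theta_true + 0.01 * rng.standard_normal(200)
theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
print(theta_hat)  # close to [2.0, -1.0, 0.5]
```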

  25. Example

  26. Statistical Properties of Least Square Estimation Technique

  27. Recursive Least Squares (RLS) Estimation
  • In adaptive controllers, the observations are obtained sequentially in real time.
  • To save computation time, the computation can be made recursive in nature.
  • The computation of the least squares estimate can be arranged in such a way that the results obtained at time t - 1 are used to get the estimates at time t.
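The recursion above can be sketched as follows. This is the standard RLS form with gain K(t) and covariance-like matrix P(t); the true parameters, noise level and initial P(0) below are assumptions for the example:

```python
import numpy as np

# Standard recursive least squares.  At each step:
#   K(t)     = P(t-1) phi(t) / (1 + phi(t)^T P(t-1) phi(t))
#   theta(t) = theta(t-1) + K(t) * (y(t) - phi(t)^T theta(t-1))
#   P(t)     = (I - K(t) phi(t)^T) P(t-1)
rng = np.random.default_rng(4)
theta_true = np.array([1.0, -0.5])   # assumed true parameters
theta = np.zeros(2)
P = 1000.0 * np.eye(2)               # large P(0): little trust in theta(0)
for _ in range(500):
    phi = rng.standard_normal(2)
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (np.eye(2) - np.outer(K, phi)) @ P
print(theta)  # converges towards [1.0, -0.5]
```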

  28. RLS technique for time-varying parameters
  • In the least squares regression model given above, the parameters are assumed to be constant, but in practical situations they are time-varying in nature.
  • The least squares method can be extended to the following two cases:
  • The parameters change abruptly but infrequently.
  • The parameters change continuously but slowly.

  29. Parameters change abruptly but infrequently:
  • The case of abrupt parameter changes can be covered by resetting.
  • The matrix P in the least squares algorithm is periodically reset to αI, where α is a large number.
  • This implies that the gain K(t) in the estimator becomes large and the estimate can be updated with larger steps.
  • A more sophisticated version is to run n estimators in parallel, which are reset sequentially.
  • The estimate is then chosen by using some decision logic.

  30. Parameters are slowly time-varying in nature:
  • The case of slowly time-varying parameters can be covered by relatively simple mathematical models. The loss function in this case is taken to be V(θ, t) = (1/2) Σ_{i=1}^{t} λ^{t-i} (y(i) - φ(i)^T θ)².
  • The parameter λ is called the forgetting factor or discounting factor, with 0 < λ ≤ 1. The method is therefore called exponential forgetting or exponential discounting.
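A minimal sketch of RLS with exponential forgetting (the parameter value, the point at which it changes, λ and the noise level are all assumptions for the example). Dividing by λ in the gain and rescaling P discounts old data, so a drifting or stepping parameter can be tracked:

```python
import numpy as np

# RLS with exponential forgetting (factor lam < 1):
#   K(t) = P phi / (lam + phi^T P phi),   P <- (I - K phi^T) P / lam
rng = np.random.default_rng(5)
lam = 0.95
theta = np.zeros(1)
P = 100.0 * np.eye(1)
theta_true = 1.0
for t in range(400):
    if t == 200:
        theta_true = 2.0              # assumed parameter change to track
    phi = rng.standard_normal(1)
    y = phi @ np.array([theta_true]) + 0.01 * rng.standard_normal()
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (np.eye(1) - np.outer(K, phi)) @ P / lam
print(theta)  # tracks the new value, close to [2.0]
```

With λ = 0.95 the effective memory is roughly 1/(1 - λ) = 20 samples, so the estimate follows the jump at t = 200 within a few dozen steps; ordinary RLS (λ = 1) would average over the whole record and lag badly.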

  31. Simplified Algorithms (algorithms that avoid updating the P matrix)
  • The recursive least-squares algorithm has two sets of state variables, θ̂ and P, which must be updated at each step.
  • For large n, the updating of the matrix P dominates the computing effort.
  • There are several simplified algorithms that avoid updating the P matrix at the cost of slower convergence.
  • Some algorithms that avoid updating the P matrix are:
  • Kaczmarz's projection algorithm
  • the projection algorithm (normalized projection algorithm)
  • the stochastic approximation algorithm and
  • the least mean square algorithm.

  32. Kaczmarz's projection algorithm:
  • θ̂(t) = θ̂(t-1) + φ(t) (y(t) - φ(t)^T θ̂(t-1)) / (φ(t)^T φ(t)).
  • Each step projects the previous estimate onto the set of parameters consistent with the latest observation, so no P matrix needs to be stored.
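A minimal noise-free sketch of the projection update (the true parameters and the random regressors are assumptions for the example):

```python
import numpy as np

# Kaczmarz's projection algorithm:
#   theta(t) = theta(t-1)
#            + phi(t) * (y(t) - phi(t)^T theta(t-1)) / (phi(t)^T phi(t))
# Each step projects theta onto the hyperplane y(t) = phi(t)^T theta.
rng = np.random.default_rng(6)
theta_true = np.array([0.7, -0.3])   # assumed true parameters
theta = np.zeros(2)
for _ in range(200):
    phi = rng.standard_normal(2)
    y = phi @ theta_true             # noise-free observation
    theta = theta + phi * (y - phi @ theta) / (phi @ phi)
print(theta)  # converges to [0.7, -0.3] on noise-free data
```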

  33. Projection algorithm (normalized projection algorithm):
  • θ̂(t) = θ̂(t-1) + γ φ(t) (y(t) - φ(t)^T θ̂(t-1)) / (α + φ(t)^T φ(t)), with α > 0 and 0 < γ < 2.
  • The constant α avoids division by zero when φ(t) is small, and γ scales the step length.

  34. Stochastic approximation algorithm:
  • θ̂(t) = θ̂(t-1) + γ(t) φ(t) (y(t) - φ(t)^T θ̂(t-1)), where γ(t) is a scalar gain that decreases with time, for example γ(t) = 1 / Σ_{i=1}^{t} φ(i)^T φ(i).

  35. Least mean square algorithm
  • A still simpler algorithm is obtained by eliminating the term P(t) altogether. The result is the least mean square (LMS) algorithm, θ̂(t) = θ̂(t-1) + γ φ(t) (y(t) - φ(t)^T θ̂(t-1)), with a constant gain γ.
  Limitations of the standard least squares algorithm: it can be applied directly only to systems that can be expressed in terms of the regression model. To apply the least squares principle to any other system, the system must first be converted to a regression model.
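The LMS update can be sketched in a few lines (the true parameters, gain γ and noise level are assumptions for the example):

```python
import numpy as np

# Least mean square (LMS) update with a small constant gain gamma;
# no P matrix is stored or updated:
#   theta(t) = theta(t-1) + gamma * phi(t) * (y(t) - phi(t)^T theta(t-1))
rng = np.random.default_rng(7)
theta_true = np.array([1.2, 0.4])    # assumed true parameters
theta = np.zeros(2)
gamma = 0.05                         # assumed constant gain
for _ in range(2000):
    phi = rng.standard_normal(2)
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    theta = theta + gamma * phi * (y - phi @ theta)
print(theta)  # close to [1.2, 0.4]
```

The constant gain trades convergence speed against steady-state error: a larger γ adapts faster but leaves more residual jitter around the true parameters.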

  36. Extended Least Squares Estimation Algorithm:
